00:00:00.000 Started by upstream project "autotest-per-patch" build number 127132 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.031 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:01.364 The recommended git tool is: git 00:00:01.365 using credential 00000000-0000-0000-0000-000000000002 00:00:01.367 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:01.377 Fetching changes from the remote Git repository 00:00:01.380 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:01.391 Using shallow fetch with depth 1 00:00:01.391 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:01.391 > git --version # timeout=10 00:00:01.401 > git --version # 'git version 2.39.2' 00:00:01.402 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:01.412 Setting http proxy: proxy-dmz.intel.com:911 00:00:01.412 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.057 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.069 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.083 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:07.083 > git config core.sparsecheckout # timeout=10 00:00:07.096 > git read-tree -mu HEAD # timeout=10 00:00:07.113 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:07.144 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:07.144 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:07.243 [Pipeline] Start of Pipeline 00:00:07.252 [Pipeline] library 00:00:07.253 Loading library shm_lib@master 00:00:07.253 Library shm_lib@master is cached. Copying from home. 00:00:07.265 [Pipeline] node 00:00:07.272 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.273 [Pipeline] { 00:00:07.281 [Pipeline] catchError 00:00:07.282 [Pipeline] { 00:00:07.290 [Pipeline] wrap 00:00:07.296 [Pipeline] { 00:00:07.301 [Pipeline] stage 00:00:07.302 [Pipeline] { (Prologue) 00:00:07.429 [Pipeline] sh 00:00:07.716 + logger -p user.info -t JENKINS-CI 00:00:07.731 [Pipeline] echo 00:00:07.732 Node: CYP9 00:00:07.737 [Pipeline] sh 00:00:08.037 [Pipeline] setCustomBuildProperty 00:00:08.045 [Pipeline] echo 00:00:08.046 Cleanup processes 00:00:08.050 [Pipeline] sh 00:00:08.332 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.333 3953701 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.347 [Pipeline] sh 00:00:08.634 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.634 ++ grep -v 'sudo pgrep' 00:00:08.634 ++ awk '{print $1}' 00:00:08.634 + sudo kill -9 00:00:08.634 + true 00:00:08.650 [Pipeline] cleanWs 00:00:08.660 [WS-CLEANUP] Deleting project workspace... 00:00:08.660 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.666 [WS-CLEANUP] done 00:00:08.669 [Pipeline] setCustomBuildProperty 00:00:08.683 [Pipeline] sh 00:00:08.964 + sudo git config --global --replace-all safe.directory '*' 00:00:09.028 [Pipeline] httpRequest 00:00:09.054 [Pipeline] echo 00:00:09.056 Sorcerer 10.211.164.101 is alive 00:00:09.062 [Pipeline] httpRequest 00:00:09.066 HttpMethod: GET 00:00:09.067 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.068 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.075 Response Code: HTTP/1.1 200 OK 00:00:09.075 Success: Status code 200 is in the accepted range: 200,404 00:00:09.076 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:14.869 [Pipeline] sh 00:00:15.157 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:15.173 [Pipeline] httpRequest 00:00:15.191 [Pipeline] echo 00:00:15.193 Sorcerer 10.211.164.101 is alive 00:00:15.200 [Pipeline] httpRequest 00:00:15.205 HttpMethod: GET 00:00:15.206 URL: http://10.211.164.101/packages/spdk_223450b479bfaa60ffadfa3bb6f8e28a73f706c2.tar.gz 00:00:15.206 Sending request to url: http://10.211.164.101/packages/spdk_223450b479bfaa60ffadfa3bb6f8e28a73f706c2.tar.gz 00:00:15.220 Response Code: HTTP/1.1 200 OK 00:00:15.221 Success: Status code 200 is in the accepted range: 200,404 00:00:15.221 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_223450b479bfaa60ffadfa3bb6f8e28a73f706c2.tar.gz 00:02:39.782 [Pipeline] sh 00:02:40.079 + tar --no-same-owner -xf spdk_223450b479bfaa60ffadfa3bb6f8e28a73f706c2.tar.gz 00:02:42.639 [Pipeline] sh 00:02:42.925 + git -C spdk log --oneline -n5 00:02:42.926 223450b47 lib/event: Add support for core isolation in scheduling 00:02:42.926 6a0934c18 lib/event: Modify spdk_reactor_set_interrupt_mode() to be called from scheduling reactor 00:02:42.926 d005e023b raid: fix empty slot not updated in sb after resize 00:02:42.926 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:02:42.926 8ee2672c4 test/bdev: Add test for resized RAID with superblock 00:02:42.938 [Pipeline] } 00:02:42.954 [Pipeline] // stage 00:02:42.964 [Pipeline] stage 00:02:42.967 [Pipeline] { (Prepare) 00:02:42.984 [Pipeline] writeFile 00:02:43.000 [Pipeline] sh 00:02:43.286 + logger -p user.info -t JENKINS-CI 00:02:43.300 [Pipeline] sh 00:02:43.586 + logger -p user.info -t JENKINS-CI 00:02:43.599 [Pipeline] sh 00:02:43.885 + cat autorun-spdk.conf 00:02:43.885 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:43.885 SPDK_TEST_NVMF=1 00:02:43.885 SPDK_TEST_NVME_CLI=1 00:02:43.885 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:43.885 SPDK_TEST_NVMF_NICS=e810 00:02:43.885 SPDK_TEST_VFIOUSER=1 00:02:43.885 SPDK_RUN_UBSAN=1 00:02:43.885 NET_TYPE=phy 00:02:43.893 RUN_NIGHTLY=0 00:02:43.897 [Pipeline] readFile 00:02:43.922 [Pipeline] withEnv 00:02:43.924 [Pipeline] { 00:02:43.937 [Pipeline] sh 00:02:44.224 + set -ex 00:02:44.224 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:44.224 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:44.224 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:44.224 ++ SPDK_TEST_NVMF=1 00:02:44.224 ++ SPDK_TEST_NVME_CLI=1 00:02:44.224 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:44.224 ++ SPDK_TEST_NVMF_NICS=e810 00:02:44.224 ++ SPDK_TEST_VFIOUSER=1 00:02:44.224 ++ SPDK_RUN_UBSAN=1 00:02:44.224 ++ NET_TYPE=phy 00:02:44.224 ++ RUN_NIGHTLY=0 00:02:44.224 + case 
$SPDK_TEST_NVMF_NICS in 00:02:44.224 + DRIVERS=ice 00:02:44.224 + [[ tcp == \r\d\m\a ]] 00:02:44.224 + [[ -n ice ]] 00:02:44.224 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:44.225 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:44.225 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:44.225 rmmod: ERROR: Module irdma is not currently loaded 00:02:44.225 rmmod: ERROR: Module i40iw is not currently loaded 00:02:44.225 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:44.225 + true 00:02:44.225 + for D in $DRIVERS 00:02:44.225 + sudo modprobe ice 00:02:44.225 + exit 0 00:02:44.235 [Pipeline] } 00:02:44.258 [Pipeline] // withEnv 00:02:44.263 [Pipeline] } 00:02:44.281 [Pipeline] // stage 00:02:44.290 [Pipeline] catchError 00:02:44.292 [Pipeline] { 00:02:44.306 [Pipeline] timeout 00:02:44.306 Timeout set to expire in 50 min 00:02:44.307 [Pipeline] { 00:02:44.318 [Pipeline] stage 00:02:44.320 [Pipeline] { (Tests) 00:02:44.332 [Pipeline] sh 00:02:44.620 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:44.620 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:44.620 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:44.620 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:44.620 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:44.620 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:44.620 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:44.620 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:44.620 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:44.620 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:44.620 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:44.620 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:44.620 + source /etc/os-release 00:02:44.620 ++ NAME='Fedora Linux' 00:02:44.620 ++ VERSION='38 (Cloud Edition)' 00:02:44.620 ++ ID=fedora 00:02:44.620 ++ VERSION_ID=38 00:02:44.620 ++ VERSION_CODENAME= 00:02:44.620 ++ PLATFORM_ID=platform:f38 00:02:44.620 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:44.620 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:44.620 ++ LOGO=fedora-logo-icon 00:02:44.620 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:44.620 ++ HOME_URL=https://fedoraproject.org/ 00:02:44.620 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:44.620 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:44.620 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:44.620 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:44.620 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:44.620 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:44.620 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:44.620 ++ SUPPORT_END=2024-05-14 00:02:44.620 ++ VARIANT='Cloud Edition' 00:02:44.620 ++ VARIANT_ID=cloud 00:02:44.620 + uname -a 00:02:44.620 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:44.620 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:47.167 Hugepages 00:02:47.167 node hugesize free / total 00:02:47.167 node0 1048576kB 0 / 0 00:02:47.167 node0 2048kB 0 / 0 00:02:47.167 node1 1048576kB 0 / 0 00:02:47.167 node1 2048kB 0 / 0 00:02:47.167 00:02:47.167 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:47.167 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:47.167 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:02:47.167 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:47.167 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:47.167 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:47.167 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:47.167 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:47.167 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:47.429 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:47.429 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:47.429 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:47.429 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:47.429 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:47.429 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:47.429 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:47.429 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:47.429 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:47.429 + rm -f /tmp/spdk-ld-path 00:02:47.429 + source autorun-spdk.conf 00:02:47.429 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:47.429 ++ SPDK_TEST_NVMF=1 00:02:47.429 ++ SPDK_TEST_NVME_CLI=1 00:02:47.429 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:47.429 ++ SPDK_TEST_NVMF_NICS=e810 00:02:47.429 ++ SPDK_TEST_VFIOUSER=1 00:02:47.429 ++ SPDK_RUN_UBSAN=1 00:02:47.429 ++ NET_TYPE=phy 00:02:47.429 ++ RUN_NIGHTLY=0 00:02:47.429 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:47.429 + [[ -n '' ]] 00:02:47.429 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.429 + for M in /var/spdk/build-*-manifest.txt 00:02:47.429 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:47.429 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:47.429 + for M in /var/spdk/build-*-manifest.txt 00:02:47.429 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:47.429 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:47.429 ++ uname 00:02:47.429 + [[ Linux == \L\i\n\u\x ]] 00:02:47.429 + sudo dmesg -T 00:02:47.429 + sudo dmesg --clear 00:02:47.429 + dmesg_pid=3954678 00:02:47.429 + [[ Fedora Linux == FreeBSD ]] 00:02:47.429 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:47.429 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:47.429 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:47.429 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:47.429 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:47.429 + [[ -x /usr/src/fio-static/fio ]] 00:02:47.429 + export FIO_BIN=/usr/src/fio-static/fio 00:02:47.429 + FIO_BIN=/usr/src/fio-static/fio 00:02:47.429 + sudo dmesg -Tw 00:02:47.429 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:47.429 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:47.429 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:47.429 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:47.429 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:47.429 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:47.429 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:47.429 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:47.429 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:47.429 Test configuration: 00:02:47.429 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:47.429 SPDK_TEST_NVMF=1 00:02:47.429 SPDK_TEST_NVME_CLI=1 00:02:47.429 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:47.429 SPDK_TEST_NVMF_NICS=e810 00:02:47.429 SPDK_TEST_VFIOUSER=1 00:02:47.429 SPDK_RUN_UBSAN=1 00:02:47.429 NET_TYPE=phy 00:02:47.692 RUN_NIGHTLY=0 07:08:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:47.692 07:08:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:47.692 07:08:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.692 07:08:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.692 07:08:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.692 07:08:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.692 07:08:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.692 07:08:54 -- paths/export.sh@5 -- $ export PATH 00:02:47.692 07:08:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.692 07:08:54 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:47.692 07:08:54 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:47.692 07:08:54 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721884134.XXXXXX 00:02:47.692 07:08:54 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721884134.eOQS0s 00:02:47.692 07:08:54 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:47.692 07:08:54 -- 
common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:47.692 07:08:54 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:47.692 07:08:54 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:47.692 07:08:54 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:47.692 07:08:54 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:47.692 07:08:54 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:02:47.692 07:08:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.692 07:08:54 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:47.692 07:08:54 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:47.692 07:08:54 -- pm/common@17 -- $ local monitor 00:02:47.692 07:08:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.692 07:08:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.692 07:08:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.692 07:08:54 -- pm/common@21 -- $ date +%s 00:02:47.692 07:08:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.692 07:08:54 -- pm/common@25 -- $ sleep 1 00:02:47.692 07:08:54 -- pm/common@21 -- $ date +%s 00:02:47.692 07:08:54 -- pm/common@21 -- $ date +%s 00:02:47.692 07:08:54 -- pm/common@21 -- $ date +%s 00:02:47.692 07:08:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884134 00:02:47.692 07:08:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884134 00:02:47.692 07:08:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884134 00:02:47.692 07:08:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721884134 00:02:47.692 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884134_collect-vmstat.pm.log 00:02:47.692 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884134_collect-cpu-load.pm.log 00:02:47.692 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884134_collect-cpu-temp.pm.log 00:02:47.692 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721884134_collect-bmc-pm.bmc.pm.log 00:02:48.635 07:08:55 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:48.635 07:08:55 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:48.635 07:08:55 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:48.635 07:08:55 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:48.635 07:08:55 -- spdk/autobuild.sh@16 -- $ date -u 00:02:48.635 Thu Jul 25 05:08:55 AM UTC 2024 00:02:48.635 07:08:55 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:48.635 v24.09-pre-320-g223450b47 00:02:48.635 07:08:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:48.635 07:08:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:48.635 07:08:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:48.635 07:08:55 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:48.635 07:08:55 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:48.635 07:08:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:48.635 ************************************ 00:02:48.635 START TEST ubsan 00:02:48.635 ************************************ 00:02:48.635 07:08:55 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:48.635 using ubsan 00:02:48.635 00:02:48.635 real 0m0.000s 00:02:48.635 user 0m0.000s 00:02:48.635 sys 0m0.000s 00:02:48.635 07:08:55 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:48.635 07:08:55 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:48.635 ************************************ 00:02:48.635 END TEST ubsan 00:02:48.635 ************************************ 00:02:48.897 07:08:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:48.897 07:08:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:48.897 07:08:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:48.897 07:08:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:48.897 07:08:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:48.897 07:08:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:48.897 07:08:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:48.897 07:08:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:48.897 07:08:56 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:48.897 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:48.897 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:49.158 Using 'verbs' RDMA provider 00:03:05.055 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:17.293 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:17.293 Creating mk/config.mk...done. 00:03:17.293 Creating mk/cc.flags.mk...done. 00:03:17.293 Type 'make' to build. 00:03:17.293 07:09:24 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:03:17.293 07:09:24 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:17.293 07:09:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:17.293 07:09:24 -- common/autotest_common.sh@10 -- $ set +x 00:03:17.293 ************************************ 00:03:17.293 START TEST make 00:03:17.293 ************************************ 00:03:17.293 07:09:24 make -- common/autotest_common.sh@1125 -- $ make -j144 00:03:17.293 make[1]: Nothing to be done for 'all'. 
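For reference, the configure and build steps recorded above amount to roughly the following when run by hand against the same checkout (a sketch only: the workspace path and parallel job count are environment-specific, and the option list is copied from the autobuild log rather than being a canonical recommendation):

# Approximate manual equivalent of the autobuild configure/build recorded above.
# The SPDK checkout path is the one this CI job uses; adjust for a local tree.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j"$(nproc)"   # the CI run used -j144; $(nproc) is a portable substitute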
00:03:18.679 The Meson build system
00:03:18.679 Version: 1.3.1
00:03:18.679 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:18.679 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:18.679 Build type: native build
00:03:18.679 Project name: libvfio-user
00:03:18.679 Project version: 0.0.1
00:03:18.679 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:18.679 C linker for the host machine: cc ld.bfd 2.39-16
00:03:18.679 Host machine cpu family: x86_64
00:03:18.679 Host machine cpu: x86_64
00:03:18.679 Run-time dependency threads found: YES
00:03:18.679 Library dl found: YES
00:03:18.679 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:18.679 Run-time dependency json-c found: YES 0.17
00:03:18.679 Run-time dependency cmocka found: YES 1.1.7
00:03:18.679 Program pytest-3 found: NO
00:03:18.679 Program flake8 found: NO
00:03:18.679 Program misspell-fixer found: NO
00:03:18.679 Program restructuredtext-lint found: NO
00:03:18.679 Program valgrind found: YES (/usr/bin/valgrind)
00:03:18.679 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:18.679 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:18.679 Compiler for C supports arguments -Wwrite-strings: YES
00:03:18.679 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:18.679 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:18.679 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:18.679 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
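The Meson configuration summarized above corresponds roughly to a standalone setup like the one below (a sketch; the source and build directory names are placeholders, and the options simply mirror the user-defined options Meson reports next: buildtype debug, shared default_library, libdir /usr/local/lib):

# Illustrative standalone configuration of libvfio-user with the same options;
# directory names are placeholders, not the exact paths used by this job.
meson setup build-debug ./libvfio-user --buildtype=debug --default-library=shared --libdir=/usr/local/lib
ninja -C build-debug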
00:03:18.679 Build targets in project: 8 00:03:18.679 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:18.679 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:18.679 00:03:18.679 libvfio-user 0.0.1 00:03:18.679 00:03:18.679 User defined options 00:03:18.679 buildtype : debug 00:03:18.679 default_library: shared 00:03:18.679 libdir : /usr/local/lib 00:03:18.679 00:03:18.679 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:18.679 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:18.939 [1/37] Compiling C object samples/null.p/null.c.o 00:03:18.939 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:18.939 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:18.939 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:18.939 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:18.939 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:18.939 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:18.939 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:18.939 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:18.939 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:18.939 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:18.939 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:18.939 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:18.939 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:18.939 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:18.939 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:18.939 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:18.939 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:18.939 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:18.939 [20/37] Compiling C object samples/server.p/server.c.o 00:03:18.939 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:18.939 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:18.939 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:18.939 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:18.939 [25/37] Compiling C object samples/client.p/client.c.o 00:03:18.939 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:18.939 [27/37] Linking target samples/client 00:03:18.939 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:18.939 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:18.939 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:18.939 [31/37] Linking target test/unit_tests 00:03:19.198 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:19.198 [33/37] Linking target samples/server 00:03:19.198 [34/37] Linking target samples/null 00:03:19.198 [35/37] Linking target samples/gpio-pci-idio-16 00:03:19.198 [36/37] Linking target samples/lspci 00:03:19.198 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:19.198 INFO: autodetecting backend as ninja 00:03:19.198 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:19.198 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:19.458 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:19.458 ninja: no work to do. 00:03:26.060 The Meson build system 00:03:26.060 Version: 1.3.1 00:03:26.060 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:26.060 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:26.060 Build type: native build 00:03:26.060 Program cat found: YES (/usr/bin/cat) 00:03:26.060 Project name: DPDK 00:03:26.060 Project version: 24.03.0 00:03:26.060 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:26.060 C linker for the host machine: cc ld.bfd 2.39-16 00:03:26.060 Host machine cpu family: x86_64 00:03:26.060 Host machine cpu: x86_64 00:03:26.060 Message: ## Building in Developer Mode ## 00:03:26.060 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:26.061 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:26.061 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:26.061 Program python3 found: YES (/usr/bin/python3) 00:03:26.061 Program cat found: YES (/usr/bin/cat) 00:03:26.061 Compiler for C supports arguments -march=native: YES 00:03:26.061 Checking for size of "void *" : 8 00:03:26.061 Checking for size of "void *" : 8 (cached) 00:03:26.061 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:26.061 Library m found: YES 00:03:26.061 Library numa found: YES 00:03:26.061 Has header "numaif.h" : YES 00:03:26.061 Library fdt found: NO 00:03:26.061 Library execinfo found: NO 00:03:26.061 Has header "execinfo.h" : YES 00:03:26.061 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:26.061 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:26.061 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:26.061 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:26.061 Run-time dependency openssl found: YES 3.0.9 00:03:26.061 Run-time dependency libpcap found: YES 1.10.4 00:03:26.061 Has header "pcap.h" with dependency libpcap: YES 00:03:26.061 Compiler for C supports arguments -Wcast-qual: YES 00:03:26.061 Compiler for C supports arguments -Wdeprecated: YES 00:03:26.061 Compiler for C supports arguments -Wformat: YES 00:03:26.061 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:26.061 Compiler for C supports arguments -Wformat-security: NO 00:03:26.061 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:26.061 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:26.061 Compiler for C supports arguments -Wnested-externs: YES 00:03:26.061 Compiler for C supports arguments -Wold-style-definition: YES 00:03:26.061 Compiler for C supports arguments -Wpointer-arith: YES 00:03:26.061 Compiler for C supports arguments -Wsign-compare: YES 00:03:26.061 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:26.061 Compiler for C supports arguments -Wundef: YES 00:03:26.061 Compiler for C supports arguments -Wwrite-strings: YES 00:03:26.061 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:26.061 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:03:26.061 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:26.061 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:26.061 Program objdump found: YES (/usr/bin/objdump) 00:03:26.061 Compiler for C supports arguments -mavx512f: YES 00:03:26.061 Checking if "AVX512 checking" compiles: YES 00:03:26.061 Fetching value of define "__SSE4_2__" : 1 00:03:26.061 Fetching value of define "__AES__" : 1 00:03:26.061 Fetching value of define "__AVX__" : 1 00:03:26.061 Fetching value of define "__AVX2__" : 1 00:03:26.061 Fetching value of define "__AVX512BW__" : 1 00:03:26.061 Fetching value of define "__AVX512CD__" : 1 00:03:26.061 Fetching value of define "__AVX512DQ__" : 1 00:03:26.061 Fetching value of define "__AVX512F__" : 1 00:03:26.061 Fetching value of define "__AVX512VL__" : 1 00:03:26.061 Fetching value of define "__PCLMUL__" : 1 00:03:26.061 Fetching value of define "__RDRND__" : 1 00:03:26.061 Fetching value of define "__RDSEED__" : 1 00:03:26.061 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:26.061 Fetching value of define "__znver1__" : (undefined) 00:03:26.061 Fetching value of define "__znver2__" : (undefined) 00:03:26.061 Fetching value of define "__znver3__" : (undefined) 00:03:26.061 Fetching value of define "__znver4__" : (undefined) 00:03:26.061 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:26.061 Message: lib/log: Defining dependency "log" 00:03:26.061 Message: lib/kvargs: Defining dependency "kvargs" 00:03:26.061 Message: lib/telemetry: Defining dependency "telemetry" 00:03:26.061 Checking for function "getentropy" : NO 00:03:26.061 Message: lib/eal: Defining dependency "eal" 00:03:26.061 Message: lib/ring: Defining dependency "ring" 00:03:26.061 Message: lib/rcu: Defining dependency "rcu" 00:03:26.061 Message: lib/mempool: Defining dependency "mempool" 00:03:26.061 Message: lib/mbuf: Defining dependency "mbuf" 00:03:26.061 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:26.061 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:26.061 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:26.061 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:26.061 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:26.061 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:26.061 Compiler for C supports arguments -mpclmul: YES 00:03:26.061 Compiler for C supports arguments -maes: YES 00:03:26.061 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:26.061 Compiler for C supports arguments -mavx512bw: YES 00:03:26.061 Compiler for C supports arguments -mavx512dq: YES 00:03:26.061 Compiler for C supports arguments -mavx512vl: YES 00:03:26.061 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:26.061 Compiler for C supports arguments -mavx2: YES 00:03:26.061 Compiler for C supports arguments -mavx: YES 00:03:26.061 Message: lib/net: Defining dependency "net" 00:03:26.061 Message: lib/meter: Defining dependency "meter" 00:03:26.061 Message: lib/ethdev: Defining dependency "ethdev" 00:03:26.061 Message: lib/pci: Defining dependency "pci" 00:03:26.061 Message: lib/cmdline: Defining dependency "cmdline" 00:03:26.061 Message: lib/hash: Defining dependency "hash" 00:03:26.061 Message: lib/timer: Defining dependency "timer" 00:03:26.061 Message: lib/compressdev: Defining dependency "compressdev" 00:03:26.061 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:26.061 Message: lib/dmadev: Defining dependency "dmadev" 00:03:26.061 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:03:26.061 Message: lib/power: Defining dependency "power" 00:03:26.061 Message: lib/reorder: Defining dependency "reorder" 00:03:26.061 Message: lib/security: Defining dependency "security" 00:03:26.061 Has header "linux/userfaultfd.h" : YES 00:03:26.061 Has header "linux/vduse.h" : YES 00:03:26.061 Message: lib/vhost: Defining dependency "vhost" 00:03:26.061 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:26.061 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:26.061 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:26.061 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:26.061 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:26.061 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:26.061 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:26.061 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:26.061 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:26.061 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:26.061 Program doxygen found: YES (/usr/bin/doxygen) 00:03:26.061 Configuring doxy-api-html.conf using configuration 00:03:26.061 Configuring doxy-api-man.conf using configuration 00:03:26.061 Program mandb found: YES (/usr/bin/mandb) 00:03:26.061 Program sphinx-build found: NO 00:03:26.061 Configuring rte_build_config.h using configuration 00:03:26.061 Message: 00:03:26.061 ================= 00:03:26.061 Applications Enabled 00:03:26.061 ================= 00:03:26.061 00:03:26.061 apps: 00:03:26.061 00:03:26.061 00:03:26.061 Message: 00:03:26.061 ================= 00:03:26.061 Libraries Enabled 00:03:26.061 ================= 00:03:26.061 00:03:26.061 libs: 00:03:26.061 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:26.061 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:26.061 cryptodev, dmadev, power, reorder, security, vhost, 00:03:26.061 00:03:26.061 Message: 00:03:26.061 =============== 00:03:26.061 Drivers Enabled 00:03:26.061 =============== 00:03:26.061 00:03:26.061 common: 00:03:26.061 00:03:26.061 bus: 00:03:26.061 pci, vdev, 00:03:26.061 mempool: 00:03:26.061 ring, 00:03:26.061 dma: 00:03:26.061 00:03:26.061 net: 00:03:26.061 00:03:26.061 crypto: 00:03:26.061 00:03:26.061 compress: 00:03:26.061 00:03:26.061 vdpa: 00:03:26.061 00:03:26.061 00:03:26.061 Message: 00:03:26.061 ================= 00:03:26.061 Content Skipped 00:03:26.061 ================= 00:03:26.061 00:03:26.061 apps: 00:03:26.061 dumpcap: explicitly disabled via build config 00:03:26.061 graph: explicitly disabled via build config 00:03:26.061 pdump: explicitly disabled via build config 00:03:26.061 proc-info: explicitly disabled via build config 00:03:26.061 test-acl: explicitly disabled via build config 00:03:26.061 test-bbdev: explicitly disabled via build config 00:03:26.061 test-cmdline: explicitly disabled via build config 00:03:26.061 test-compress-perf: explicitly disabled via build config 00:03:26.061 test-crypto-perf: explicitly disabled via build config 00:03:26.061 test-dma-perf: explicitly disabled via build config 00:03:26.061 test-eventdev: explicitly disabled via build config 00:03:26.061 test-fib: explicitly disabled via build config 00:03:26.061 test-flow-perf: explicitly disabled via build config 00:03:26.061 test-gpudev: explicitly disabled via build config 00:03:26.061 
test-mldev: explicitly disabled via build config 00:03:26.061 test-pipeline: explicitly disabled via build config 00:03:26.061 test-pmd: explicitly disabled via build config 00:03:26.061 test-regex: explicitly disabled via build config 00:03:26.061 test-sad: explicitly disabled via build config 00:03:26.061 test-security-perf: explicitly disabled via build config 00:03:26.061 00:03:26.061 libs: 00:03:26.061 argparse: explicitly disabled via build config 00:03:26.061 metrics: explicitly disabled via build config 00:03:26.061 acl: explicitly disabled via build config 00:03:26.061 bbdev: explicitly disabled via build config 00:03:26.061 bitratestats: explicitly disabled via build config 00:03:26.061 bpf: explicitly disabled via build config 00:03:26.061 cfgfile: explicitly disabled via build config 00:03:26.061 distributor: explicitly disabled via build config 00:03:26.062 efd: explicitly disabled via build config 00:03:26.062 eventdev: explicitly disabled via build config 00:03:26.062 dispatcher: explicitly disabled via build config 00:03:26.062 gpudev: explicitly disabled via build config 00:03:26.062 gro: explicitly disabled via build config 00:03:26.062 gso: explicitly disabled via build config 00:03:26.062 ip_frag: explicitly disabled via build config 00:03:26.062 jobstats: explicitly disabled via build config 00:03:26.062 latencystats: explicitly disabled via build config 00:03:26.062 lpm: explicitly disabled via build config 00:03:26.062 member: explicitly disabled via build config 00:03:26.062 pcapng: explicitly disabled via build config 00:03:26.062 rawdev: explicitly disabled via build config 00:03:26.062 regexdev: explicitly disabled via build config 00:03:26.062 mldev: explicitly disabled via build config 00:03:26.062 rib: explicitly disabled via build config 00:03:26.062 sched: explicitly disabled via build config 00:03:26.062 stack: explicitly disabled via build config 00:03:26.062 ipsec: explicitly disabled via build config 00:03:26.062 pdcp: explicitly disabled via build config 00:03:26.062 fib: explicitly disabled via build config 00:03:26.062 port: explicitly disabled via build config 00:03:26.062 pdump: explicitly disabled via build config 00:03:26.062 table: explicitly disabled via build config 00:03:26.062 pipeline: explicitly disabled via build config 00:03:26.062 graph: explicitly disabled via build config 00:03:26.062 node: explicitly disabled via build config 00:03:26.062 00:03:26.062 drivers: 00:03:26.062 common/cpt: not in enabled drivers build config 00:03:26.062 common/dpaax: not in enabled drivers build config 00:03:26.062 common/iavf: not in enabled drivers build config 00:03:26.062 common/idpf: not in enabled drivers build config 00:03:26.062 common/ionic: not in enabled drivers build config 00:03:26.062 common/mvep: not in enabled drivers build config 00:03:26.062 common/octeontx: not in enabled drivers build config 00:03:26.062 bus/auxiliary: not in enabled drivers build config 00:03:26.062 bus/cdx: not in enabled drivers build config 00:03:26.062 bus/dpaa: not in enabled drivers build config 00:03:26.062 bus/fslmc: not in enabled drivers build config 00:03:26.062 bus/ifpga: not in enabled drivers build config 00:03:26.062 bus/platform: not in enabled drivers build config 00:03:26.062 bus/uacce: not in enabled drivers build config 00:03:26.062 bus/vmbus: not in enabled drivers build config 00:03:26.062 common/cnxk: not in enabled drivers build config 00:03:26.062 common/mlx5: not in enabled drivers build config 00:03:26.062 common/nfp: not in enabled drivers 
build config 00:03:26.062 common/nitrox: not in enabled drivers build config 00:03:26.062 common/qat: not in enabled drivers build config 00:03:26.062 common/sfc_efx: not in enabled drivers build config 00:03:26.062 mempool/bucket: not in enabled drivers build config 00:03:26.062 mempool/cnxk: not in enabled drivers build config 00:03:26.062 mempool/dpaa: not in enabled drivers build config 00:03:26.062 mempool/dpaa2: not in enabled drivers build config 00:03:26.062 mempool/octeontx: not in enabled drivers build config 00:03:26.062 mempool/stack: not in enabled drivers build config 00:03:26.062 dma/cnxk: not in enabled drivers build config 00:03:26.062 dma/dpaa: not in enabled drivers build config 00:03:26.062 dma/dpaa2: not in enabled drivers build config 00:03:26.062 dma/hisilicon: not in enabled drivers build config 00:03:26.062 dma/idxd: not in enabled drivers build config 00:03:26.062 dma/ioat: not in enabled drivers build config 00:03:26.062 dma/skeleton: not in enabled drivers build config 00:03:26.062 net/af_packet: not in enabled drivers build config 00:03:26.062 net/af_xdp: not in enabled drivers build config 00:03:26.062 net/ark: not in enabled drivers build config 00:03:26.062 net/atlantic: not in enabled drivers build config 00:03:26.062 net/avp: not in enabled drivers build config 00:03:26.062 net/axgbe: not in enabled drivers build config 00:03:26.062 net/bnx2x: not in enabled drivers build config 00:03:26.062 net/bnxt: not in enabled drivers build config 00:03:26.062 net/bonding: not in enabled drivers build config 00:03:26.062 net/cnxk: not in enabled drivers build config 00:03:26.062 net/cpfl: not in enabled drivers build config 00:03:26.062 net/cxgbe: not in enabled drivers build config 00:03:26.062 net/dpaa: not in enabled drivers build config 00:03:26.062 net/dpaa2: not in enabled drivers build config 00:03:26.062 net/e1000: not in enabled drivers build config 00:03:26.062 net/ena: not in enabled drivers build config 00:03:26.062 net/enetc: not in enabled drivers build config 00:03:26.062 net/enetfec: not in enabled drivers build config 00:03:26.062 net/enic: not in enabled drivers build config 00:03:26.062 net/failsafe: not in enabled drivers build config 00:03:26.062 net/fm10k: not in enabled drivers build config 00:03:26.062 net/gve: not in enabled drivers build config 00:03:26.062 net/hinic: not in enabled drivers build config 00:03:26.062 net/hns3: not in enabled drivers build config 00:03:26.062 net/i40e: not in enabled drivers build config 00:03:26.062 net/iavf: not in enabled drivers build config 00:03:26.062 net/ice: not in enabled drivers build config 00:03:26.062 net/idpf: not in enabled drivers build config 00:03:26.062 net/igc: not in enabled drivers build config 00:03:26.062 net/ionic: not in enabled drivers build config 00:03:26.062 net/ipn3ke: not in enabled drivers build config 00:03:26.062 net/ixgbe: not in enabled drivers build config 00:03:26.062 net/mana: not in enabled drivers build config 00:03:26.062 net/memif: not in enabled drivers build config 00:03:26.062 net/mlx4: not in enabled drivers build config 00:03:26.062 net/mlx5: not in enabled drivers build config 00:03:26.062 net/mvneta: not in enabled drivers build config 00:03:26.062 net/mvpp2: not in enabled drivers build config 00:03:26.062 net/netvsc: not in enabled drivers build config 00:03:26.062 net/nfb: not in enabled drivers build config 00:03:26.062 net/nfp: not in enabled drivers build config 00:03:26.062 net/ngbe: not in enabled drivers build config 00:03:26.062 net/null: not in 
enabled drivers build config 00:03:26.062 net/octeontx: not in enabled drivers build config 00:03:26.062 net/octeon_ep: not in enabled drivers build config 00:03:26.062 net/pcap: not in enabled drivers build config 00:03:26.062 net/pfe: not in enabled drivers build config 00:03:26.062 net/qede: not in enabled drivers build config 00:03:26.062 net/ring: not in enabled drivers build config 00:03:26.062 net/sfc: not in enabled drivers build config 00:03:26.062 net/softnic: not in enabled drivers build config 00:03:26.062 net/tap: not in enabled drivers build config 00:03:26.062 net/thunderx: not in enabled drivers build config 00:03:26.062 net/txgbe: not in enabled drivers build config 00:03:26.062 net/vdev_netvsc: not in enabled drivers build config 00:03:26.062 net/vhost: not in enabled drivers build config 00:03:26.062 net/virtio: not in enabled drivers build config 00:03:26.062 net/vmxnet3: not in enabled drivers build config 00:03:26.062 raw/*: missing internal dependency, "rawdev" 00:03:26.062 crypto/armv8: not in enabled drivers build config 00:03:26.062 crypto/bcmfs: not in enabled drivers build config 00:03:26.062 crypto/caam_jr: not in enabled drivers build config 00:03:26.062 crypto/ccp: not in enabled drivers build config 00:03:26.062 crypto/cnxk: not in enabled drivers build config 00:03:26.062 crypto/dpaa_sec: not in enabled drivers build config 00:03:26.062 crypto/dpaa2_sec: not in enabled drivers build config 00:03:26.062 crypto/ipsec_mb: not in enabled drivers build config 00:03:26.062 crypto/mlx5: not in enabled drivers build config 00:03:26.062 crypto/mvsam: not in enabled drivers build config 00:03:26.062 crypto/nitrox: not in enabled drivers build config 00:03:26.062 crypto/null: not in enabled drivers build config 00:03:26.062 crypto/octeontx: not in enabled drivers build config 00:03:26.062 crypto/openssl: not in enabled drivers build config 00:03:26.062 crypto/scheduler: not in enabled drivers build config 00:03:26.062 crypto/uadk: not in enabled drivers build config 00:03:26.062 crypto/virtio: not in enabled drivers build config 00:03:26.062 compress/isal: not in enabled drivers build config 00:03:26.062 compress/mlx5: not in enabled drivers build config 00:03:26.062 compress/nitrox: not in enabled drivers build config 00:03:26.062 compress/octeontx: not in enabled drivers build config 00:03:26.062 compress/zlib: not in enabled drivers build config 00:03:26.062 regex/*: missing internal dependency, "regexdev" 00:03:26.062 ml/*: missing internal dependency, "mldev" 00:03:26.062 vdpa/ifc: not in enabled drivers build config 00:03:26.062 vdpa/mlx5: not in enabled drivers build config 00:03:26.062 vdpa/nfp: not in enabled drivers build config 00:03:26.062 vdpa/sfc: not in enabled drivers build config 00:03:26.062 event/*: missing internal dependency, "eventdev" 00:03:26.062 baseband/*: missing internal dependency, "bbdev" 00:03:26.062 gpu/*: missing internal dependency, "gpudev" 00:03:26.062 00:03:26.062 00:03:26.062 Build targets in project: 84 00:03:26.062 00:03:26.062 DPDK 24.03.0 00:03:26.062 00:03:26.062 User defined options 00:03:26.062 buildtype : debug 00:03:26.062 default_library : shared 00:03:26.062 libdir : lib 00:03:26.062 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:26.062 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:26.062 c_link_args : 00:03:26.062 cpu_instruction_set: native 00:03:26.062 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:26.062 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:26.062 enable_docs : false 00:03:26.062 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:26.062 enable_kmods : false 00:03:26.062 max_lcores : 128 00:03:26.062 tests : false 00:03:26.062 00:03:26.062 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:26.062 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:26.062 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:26.063 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:26.063 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:26.063 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:26.063 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:26.063 [6/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:26.063 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:26.063 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:26.063 [9/267] Linking static target lib/librte_kvargs.a 00:03:26.063 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:26.063 [11/267] Linking static target lib/librte_log.a 00:03:26.329 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:26.329 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:26.329 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:26.329 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:26.329 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:26.329 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:26.329 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:26.329 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:26.329 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:26.329 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:26.329 [22/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:26.329 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:26.329 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:26.329 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:26.329 [26/267] Linking static target lib/librte_pci.a 00:03:26.329 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:26.329 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:26.329 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:26.329 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:26.329 [31/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:26.329 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:26.329 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:26.329 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:26.329 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:26.329 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:26.589 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:26.589 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:26.589 [39/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:26.589 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:26.589 [41/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:26.589 [42/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.589 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:26.589 [44/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:26.589 [45/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.589 [46/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:26.589 [47/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:26.589 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:26.589 [49/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:26.589 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:26.589 [51/267] Linking static target lib/librte_telemetry.a 00:03:26.589 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:26.589 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:26.589 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:26.589 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:26.589 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:26.589 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:26.589 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:26.589 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:26.589 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:26.589 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:26.589 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:26.589 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:26.589 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:26.589 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:26.589 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:26.589 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:26.589 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:26.849 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:26.849 [70/267] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:26.849 [71/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:26.849 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:26.849 [73/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:26.849 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:26.849 [75/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:26.849 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:26.849 [77/267] Linking static target lib/librte_ring.a 00:03:26.849 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:26.849 [79/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:26.849 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:26.849 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:26.849 [82/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:26.849 [83/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:26.849 [84/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:26.849 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:26.849 [86/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:26.850 [87/267] Linking static target lib/librte_timer.a 00:03:26.850 [88/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:26.850 [89/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:26.850 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:26.850 [91/267] Linking static target lib/librte_meter.a 00:03:26.850 [92/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:26.850 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:26.850 [94/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:26.850 [95/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:26.850 [96/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:26.850 [97/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:26.850 [98/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:26.850 [99/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:26.850 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:26.850 [101/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:26.850 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:26.850 [103/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:26.850 [104/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:26.850 [105/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:26.850 [106/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:26.850 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:26.850 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:26.850 [109/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:26.850 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:26.850 [111/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:26.850 [112/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:26.850 [113/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:26.850 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:26.850 [115/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:26.850 [116/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:26.850 [117/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:26.850 [118/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:26.850 [119/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:26.850 [120/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:26.850 [121/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:26.850 [122/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:26.850 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:26.850 [124/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:26.850 [125/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.850 [126/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:26.850 [127/267] Linking static target lib/librte_security.a 00:03:26.850 [128/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:26.850 [129/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:26.850 [130/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:26.850 [131/267] Linking static target lib/librte_dmadev.a 00:03:26.850 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:26.850 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:26.850 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:26.850 [135/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:26.850 [136/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:26.850 [137/267] Linking target lib/librte_log.so.24.1 00:03:26.850 [138/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:26.850 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:26.850 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:26.850 [141/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:26.850 [142/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:26.850 [143/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:26.850 [144/267] Linking static target lib/librte_power.a 00:03:26.850 [145/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:26.850 [146/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:26.850 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:26.850 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:26.850 [149/267] Linking static target lib/librte_mempool.a 00:03:26.850 [150/267] Linking static target lib/librte_cmdline.a 00:03:26.850 [151/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:26.850 [152/267] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:26.850 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:26.850 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:26.850 [155/267] Linking static target lib/librte_net.a 00:03:26.850 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:26.850 [157/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:26.850 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:26.850 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:26.850 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:26.850 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:26.850 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:26.850 [163/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:26.850 [164/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:26.850 [165/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:26.850 [166/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:26.850 [167/267] Linking static target lib/librte_compressdev.a 00:03:26.850 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:26.850 [169/267] Linking static target lib/librte_rcu.a 00:03:26.850 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:26.850 [171/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.850 [172/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:27.111 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:27.111 [174/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:27.111 [175/267] Linking static target drivers/librte_bus_vdev.a 00:03:27.111 [176/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:27.111 [177/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:27.111 [178/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:27.111 [179/267] Linking static target lib/librte_hash.a 00:03:27.111 [180/267] Linking static target lib/librte_reorder.a 00:03:27.111 [181/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.111 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:27.111 [183/267] Linking static target lib/librte_eal.a 00:03:27.111 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:27.111 [185/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:27.111 [186/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.111 [187/267] Linking target lib/librte_kvargs.so.24.1 00:03:27.111 [188/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:27.111 [189/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:27.111 [190/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:27.111 [191/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:27.111 [192/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:27.111 [193/267] 
Linking static target drivers/librte_bus_pci.a 00:03:27.111 [194/267] Linking static target lib/librte_mbuf.a 00:03:27.111 [195/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:27.111 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:27.111 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:27.111 [198/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:27.111 [199/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.111 [200/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.111 [201/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:27.111 [202/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.373 [203/267] Linking target lib/librte_telemetry.so.24.1 00:03:27.373 [204/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:27.373 [205/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:27.373 [206/267] Linking static target lib/librte_cryptodev.a 00:03:27.373 [207/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:27.373 [208/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:27.373 [209/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.373 [210/267] Linking static target drivers/librte_mempool_ring.a 00:03:27.373 [211/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:27.373 [212/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.373 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:27.373 [214/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.634 [215/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.634 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.634 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:27.634 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:27.634 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.634 [220/267] Linking static target lib/librte_ethdev.a 00:03:27.895 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.895 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.895 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.895 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.895 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.156 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.729 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:28.729 [228/267] Linking static target lib/librte_vhost.a 00:03:29.673 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped 
by meson to capture output) 00:03:31.135 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.727 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.671 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.932 [233/267] Linking target lib/librte_eal.so.24.1 00:03:38.932 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:38.932 [235/267] Linking target lib/librte_ring.so.24.1 00:03:38.932 [236/267] Linking target lib/librte_meter.so.24.1 00:03:38.932 [237/267] Linking target lib/librte_pci.so.24.1 00:03:38.932 [238/267] Linking target lib/librte_timer.so.24.1 00:03:38.932 [239/267] Linking target lib/librte_dmadev.so.24.1 00:03:38.932 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:39.193 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:39.193 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:39.193 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:39.193 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:39.193 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:39.193 [246/267] Linking target lib/librte_rcu.so.24.1 00:03:39.193 [247/267] Linking target lib/librte_mempool.so.24.1 00:03:39.193 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:39.454 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:39.454 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:39.454 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:39.454 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:39.715 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:39.715 [254/267] Linking target lib/librte_net.so.24.1 00:03:39.715 [255/267] Linking target lib/librte_compressdev.so.24.1 00:03:39.715 [256/267] Linking target lib/librte_reorder.so.24.1 00:03:39.715 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:39.715 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:39.715 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:39.977 [260/267] Linking target lib/librte_security.so.24.1 00:03:39.977 [261/267] Linking target lib/librte_hash.so.24.1 00:03:39.977 [262/267] Linking target lib/librte_cmdline.so.24.1 00:03:39.977 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:39.977 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:39.977 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:39.977 [266/267] Linking target lib/librte_power.so.24.1 00:03:40.239 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:40.239 INFO: autodetecting backend as ninja 00:03:40.239 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:41.182 CC lib/ut/ut.o 00:03:41.182 CC lib/log/log.o 00:03:41.182 CC lib/log/log_deprecated.o 00:03:41.182 CC lib/log/log_flags.o 00:03:41.182 CC lib/ut_mock/mock.o 00:03:41.444 LIB libspdk_ut.a 00:03:41.444 LIB libspdk_log.a 00:03:41.444 LIB 
libspdk_ut_mock.a 00:03:41.444 SO libspdk_ut.so.2.0 00:03:41.444 SO libspdk_log.so.7.0 00:03:41.444 SO libspdk_ut_mock.so.6.0 00:03:41.444 SYMLINK libspdk_ut.so 00:03:41.444 SYMLINK libspdk_log.so 00:03:41.444 SYMLINK libspdk_ut_mock.so 00:03:41.706 CC lib/util/base64.o 00:03:41.706 CC lib/util/bit_array.o 00:03:41.706 CC lib/util/cpuset.o 00:03:41.706 CC lib/util/crc16.o 00:03:41.706 CC lib/util/crc32.o 00:03:41.706 CC lib/util/crc32c.o 00:03:41.706 CC lib/util/crc32_ieee.o 00:03:41.706 CC lib/util/crc64.o 00:03:41.706 CC lib/util/dif.o 00:03:41.706 CC lib/util/fd.o 00:03:41.706 CC lib/util/fd_group.o 00:03:41.706 CC lib/dma/dma.o 00:03:41.706 CC lib/util/file.o 00:03:41.706 CC lib/util/hexlify.o 00:03:41.706 CC lib/util/math.o 00:03:41.706 CXX lib/trace_parser/trace.o 00:03:41.706 CC lib/util/iov.o 00:03:41.706 CC lib/util/net.o 00:03:41.706 CC lib/ioat/ioat.o 00:03:41.706 CC lib/util/pipe.o 00:03:41.706 CC lib/util/strerror_tls.o 00:03:41.706 CC lib/util/string.o 00:03:41.706 CC lib/util/uuid.o 00:03:41.706 CC lib/util/xor.o 00:03:41.706 CC lib/util/zipf.o 00:03:41.968 CC lib/vfio_user/host/vfio_user_pci.o 00:03:41.968 CC lib/vfio_user/host/vfio_user.o 00:03:41.968 LIB libspdk_dma.a 00:03:41.968 SO libspdk_dma.so.4.0 00:03:42.230 LIB libspdk_ioat.a 00:03:42.230 SYMLINK libspdk_dma.so 00:03:42.230 SO libspdk_ioat.so.7.0 00:03:42.230 SYMLINK libspdk_ioat.so 00:03:42.230 LIB libspdk_vfio_user.a 00:03:42.230 SO libspdk_vfio_user.so.5.0 00:03:42.230 LIB libspdk_util.a 00:03:42.492 SYMLINK libspdk_vfio_user.so 00:03:42.492 SO libspdk_util.so.10.0 00:03:42.492 SYMLINK libspdk_util.so 00:03:42.754 LIB libspdk_trace_parser.a 00:03:42.754 SO libspdk_trace_parser.so.5.0 00:03:42.754 SYMLINK libspdk_trace_parser.so 00:03:43.015 CC lib/idxd/idxd.o 00:03:43.015 CC lib/conf/conf.o 00:03:43.015 CC lib/idxd/idxd_user.o 00:03:43.015 CC lib/idxd/idxd_kernel.o 00:03:43.015 CC lib/vmd/vmd.o 00:03:43.015 CC lib/vmd/led.o 00:03:43.015 CC lib/rdma_provider/common.o 00:03:43.015 CC lib/env_dpdk/env.o 00:03:43.015 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:43.015 CC lib/env_dpdk/memory.o 00:03:43.015 CC lib/env_dpdk/pci.o 00:03:43.015 CC lib/env_dpdk/threads.o 00:03:43.015 CC lib/env_dpdk/init.o 00:03:43.015 CC lib/json/json_parse.o 00:03:43.015 CC lib/rdma_utils/rdma_utils.o 00:03:43.015 CC lib/json/json_util.o 00:03:43.015 CC lib/json/json_write.o 00:03:43.015 CC lib/env_dpdk/pci_ioat.o 00:03:43.015 CC lib/env_dpdk/pci_virtio.o 00:03:43.015 CC lib/env_dpdk/pci_vmd.o 00:03:43.015 CC lib/env_dpdk/pci_idxd.o 00:03:43.015 CC lib/env_dpdk/pci_event.o 00:03:43.015 CC lib/env_dpdk/sigbus_handler.o 00:03:43.015 CC lib/env_dpdk/pci_dpdk.o 00:03:43.015 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:43.015 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:43.015 LIB libspdk_rdma_provider.a 00:03:43.015 LIB libspdk_conf.a 00:03:43.277 SO libspdk_conf.so.6.0 00:03:43.277 SO libspdk_rdma_provider.so.6.0 00:03:43.277 LIB libspdk_rdma_utils.a 00:03:43.277 LIB libspdk_json.a 00:03:43.277 SYMLINK libspdk_rdma_provider.so 00:03:43.277 SO libspdk_rdma_utils.so.1.0 00:03:43.277 SYMLINK libspdk_conf.so 00:03:43.277 SO libspdk_json.so.6.0 00:03:43.277 SYMLINK libspdk_rdma_utils.so 00:03:43.277 SYMLINK libspdk_json.so 00:03:43.277 LIB libspdk_idxd.a 00:03:43.540 SO libspdk_idxd.so.12.0 00:03:43.540 LIB libspdk_vmd.a 00:03:43.540 SO libspdk_vmd.so.6.0 00:03:43.540 SYMLINK libspdk_idxd.so 00:03:43.540 SYMLINK libspdk_vmd.so 00:03:43.802 CC lib/jsonrpc/jsonrpc_server.o 00:03:43.802 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:43.802 CC 
lib/jsonrpc/jsonrpc_client.o 00:03:43.802 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:44.064 LIB libspdk_jsonrpc.a 00:03:44.064 SO libspdk_jsonrpc.so.6.0 00:03:44.064 SYMLINK libspdk_jsonrpc.so 00:03:44.064 LIB libspdk_env_dpdk.a 00:03:44.064 SO libspdk_env_dpdk.so.15.0 00:03:44.325 SYMLINK libspdk_env_dpdk.so 00:03:44.325 CC lib/rpc/rpc.o 00:03:44.586 LIB libspdk_rpc.a 00:03:44.586 SO libspdk_rpc.so.6.0 00:03:44.848 SYMLINK libspdk_rpc.so 00:03:45.109 CC lib/trace/trace.o 00:03:45.109 CC lib/trace/trace_flags.o 00:03:45.109 CC lib/trace/trace_rpc.o 00:03:45.109 CC lib/notify/notify.o 00:03:45.109 CC lib/notify/notify_rpc.o 00:03:45.109 CC lib/keyring/keyring.o 00:03:45.109 CC lib/keyring/keyring_rpc.o 00:03:45.371 LIB libspdk_notify.a 00:03:45.371 SO libspdk_notify.so.6.0 00:03:45.371 LIB libspdk_trace.a 00:03:45.371 LIB libspdk_keyring.a 00:03:45.371 SO libspdk_keyring.so.1.0 00:03:45.371 SO libspdk_trace.so.10.0 00:03:45.371 SYMLINK libspdk_notify.so 00:03:45.371 SYMLINK libspdk_keyring.so 00:03:45.371 SYMLINK libspdk_trace.so 00:03:45.945 CC lib/thread/thread.o 00:03:45.945 CC lib/thread/iobuf.o 00:03:45.945 CC lib/sock/sock.o 00:03:45.945 CC lib/sock/sock_rpc.o 00:03:46.205 LIB libspdk_sock.a 00:03:46.205 SO libspdk_sock.so.10.0 00:03:46.205 SYMLINK libspdk_sock.so 00:03:46.776 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:46.776 CC lib/nvme/nvme_ctrlr.o 00:03:46.776 CC lib/nvme/nvme_fabric.o 00:03:46.776 CC lib/nvme/nvme_ns_cmd.o 00:03:46.776 CC lib/nvme/nvme_ns.o 00:03:46.776 CC lib/nvme/nvme_pcie_common.o 00:03:46.776 CC lib/nvme/nvme_pcie.o 00:03:46.776 CC lib/nvme/nvme_qpair.o 00:03:46.776 CC lib/nvme/nvme.o 00:03:46.776 CC lib/nvme/nvme_quirks.o 00:03:46.776 CC lib/nvme/nvme_transport.o 00:03:46.776 CC lib/nvme/nvme_discovery.o 00:03:46.777 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:46.777 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:46.777 CC lib/nvme/nvme_tcp.o 00:03:46.777 CC lib/nvme/nvme_opal.o 00:03:46.777 CC lib/nvme/nvme_io_msg.o 00:03:46.777 CC lib/nvme/nvme_poll_group.o 00:03:46.777 CC lib/nvme/nvme_zns.o 00:03:46.777 CC lib/nvme/nvme_auth.o 00:03:46.777 CC lib/nvme/nvme_cuse.o 00:03:46.777 CC lib/nvme/nvme_stubs.o 00:03:46.777 CC lib/nvme/nvme_vfio_user.o 00:03:46.777 CC lib/nvme/nvme_rdma.o 00:03:47.036 LIB libspdk_thread.a 00:03:47.036 SO libspdk_thread.so.10.1 00:03:47.297 SYMLINK libspdk_thread.so 00:03:47.559 CC lib/blob/blobstore.o 00:03:47.559 CC lib/init/subsystem.o 00:03:47.559 CC lib/init/json_config.o 00:03:47.559 CC lib/blob/request.o 00:03:47.559 CC lib/blob/zeroes.o 00:03:47.559 CC lib/blob/blob_bs_dev.o 00:03:47.559 CC lib/init/subsystem_rpc.o 00:03:47.559 CC lib/init/rpc.o 00:03:47.559 CC lib/accel/accel.o 00:03:47.559 CC lib/accel/accel_sw.o 00:03:47.559 CC lib/accel/accel_rpc.o 00:03:47.559 CC lib/virtio/virtio.o 00:03:47.559 CC lib/virtio/virtio_vhost_user.o 00:03:47.559 CC lib/virtio/virtio_vfio_user.o 00:03:47.559 CC lib/vfu_tgt/tgt_endpoint.o 00:03:47.559 CC lib/virtio/virtio_pci.o 00:03:47.559 CC lib/vfu_tgt/tgt_rpc.o 00:03:47.821 LIB libspdk_init.a 00:03:47.821 SO libspdk_init.so.5.0 00:03:47.821 LIB libspdk_vfu_tgt.a 00:03:47.821 LIB libspdk_virtio.a 00:03:47.821 SO libspdk_vfu_tgt.so.3.0 00:03:47.821 SYMLINK libspdk_init.so 00:03:48.083 SO libspdk_virtio.so.7.0 00:03:48.083 SYMLINK libspdk_vfu_tgt.so 00:03:48.083 SYMLINK libspdk_virtio.so 00:03:48.352 CC lib/event/app.o 00:03:48.352 CC lib/event/reactor.o 00:03:48.352 CC lib/event/log_rpc.o 00:03:48.352 CC lib/event/app_rpc.o 00:03:48.352 CC lib/event/scheduler_static.o 00:03:48.352 LIB libspdk_accel.a 
00:03:48.662 SO libspdk_accel.so.16.0 00:03:48.662 LIB libspdk_nvme.a 00:03:48.662 SYMLINK libspdk_accel.so 00:03:48.662 SO libspdk_nvme.so.13.1 00:03:48.662 LIB libspdk_event.a 00:03:48.662 SO libspdk_event.so.14.0 00:03:48.924 SYMLINK libspdk_event.so 00:03:48.924 CC lib/bdev/bdev.o 00:03:48.924 CC lib/bdev/bdev_rpc.o 00:03:48.924 CC lib/bdev/bdev_zone.o 00:03:48.924 CC lib/bdev/part.o 00:03:48.924 CC lib/bdev/scsi_nvme.o 00:03:48.924 SYMLINK libspdk_nvme.so 00:03:50.311 LIB libspdk_blob.a 00:03:50.311 SO libspdk_blob.so.11.0 00:03:50.311 SYMLINK libspdk_blob.so 00:03:50.573 CC lib/lvol/lvol.o 00:03:50.573 CC lib/blobfs/blobfs.o 00:03:50.573 CC lib/blobfs/tree.o 00:03:51.148 LIB libspdk_bdev.a 00:03:51.148 SO libspdk_bdev.so.16.0 00:03:51.409 SYMLINK libspdk_bdev.so 00:03:51.409 LIB libspdk_blobfs.a 00:03:51.409 SO libspdk_blobfs.so.10.0 00:03:51.409 LIB libspdk_lvol.a 00:03:51.409 SYMLINK libspdk_blobfs.so 00:03:51.409 SO libspdk_lvol.so.10.0 00:03:51.670 SYMLINK libspdk_lvol.so 00:03:51.670 CC lib/nvmf/ctrlr.o 00:03:51.670 CC lib/nvmf/ctrlr_discovery.o 00:03:51.670 CC lib/nvmf/ctrlr_bdev.o 00:03:51.670 CC lib/nvmf/subsystem.o 00:03:51.670 CC lib/nvmf/nvmf.o 00:03:51.670 CC lib/nvmf/nvmf_rpc.o 00:03:51.670 CC lib/nvmf/tcp.o 00:03:51.670 CC lib/nvmf/transport.o 00:03:51.670 CC lib/nvmf/stubs.o 00:03:51.670 CC lib/nvmf/mdns_server.o 00:03:51.670 CC lib/nvmf/vfio_user.o 00:03:51.670 CC lib/nvmf/rdma.o 00:03:51.670 CC lib/nvmf/auth.o 00:03:51.670 CC lib/nbd/nbd.o 00:03:51.670 CC lib/nbd/nbd_rpc.o 00:03:51.670 CC lib/scsi/dev.o 00:03:51.670 CC lib/ftl/ftl_core.o 00:03:51.670 CC lib/scsi/lun.o 00:03:51.670 CC lib/ftl/ftl_init.o 00:03:51.670 CC lib/scsi/port.o 00:03:51.670 CC lib/ublk/ublk.o 00:03:51.670 CC lib/ftl/ftl_layout.o 00:03:51.670 CC lib/scsi/scsi.o 00:03:51.670 CC lib/ftl/ftl_debug.o 00:03:51.670 CC lib/ublk/ublk_rpc.o 00:03:51.670 CC lib/scsi/scsi_bdev.o 00:03:51.670 CC lib/ftl/ftl_io.o 00:03:51.670 CC lib/ftl/ftl_sb.o 00:03:51.670 CC lib/scsi/scsi_pr.o 00:03:51.670 CC lib/scsi/scsi_rpc.o 00:03:51.670 CC lib/ftl/ftl_l2p.o 00:03:51.670 CC lib/ftl/ftl_l2p_flat.o 00:03:51.670 CC lib/scsi/task.o 00:03:51.670 CC lib/ftl/ftl_nv_cache.o 00:03:51.670 CC lib/ftl/ftl_band.o 00:03:51.670 CC lib/ftl/ftl_band_ops.o 00:03:51.670 CC lib/ftl/ftl_writer.o 00:03:51.670 CC lib/ftl/ftl_rq.o 00:03:51.670 CC lib/ftl/ftl_reloc.o 00:03:51.670 CC lib/ftl/ftl_l2p_cache.o 00:03:51.670 CC lib/ftl/ftl_p2l.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:51.670 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:51.670 CC lib/ftl/utils/ftl_conf.o 00:03:51.670 CC lib/ftl/utils/ftl_md.o 00:03:51.670 CC lib/ftl/utils/ftl_mempool.o 00:03:51.670 CC lib/ftl/utils/ftl_bitmap.o 00:03:51.670 CC lib/ftl/utils/ftl_property.o 00:03:51.670 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:51.670 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:51.670 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:51.670 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:51.670 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:51.670 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:03:51.670 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:51.670 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:51.670 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:51.670 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:51.670 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:51.670 CC lib/ftl/base/ftl_base_dev.o 00:03:51.670 CC lib/ftl/ftl_trace.o 00:03:51.670 CC lib/ftl/base/ftl_base_bdev.o 00:03:52.240 LIB libspdk_nbd.a 00:03:52.240 SO libspdk_nbd.so.7.0 00:03:52.240 LIB libspdk_scsi.a 00:03:52.240 SYMLINK libspdk_nbd.so 00:03:52.240 SO libspdk_scsi.so.9.0 00:03:52.240 LIB libspdk_ublk.a 00:03:52.501 SYMLINK libspdk_scsi.so 00:03:52.501 SO libspdk_ublk.so.3.0 00:03:52.501 SYMLINK libspdk_ublk.so 00:03:52.762 LIB libspdk_ftl.a 00:03:52.762 CC lib/vhost/vhost.o 00:03:52.762 CC lib/vhost/vhost_rpc.o 00:03:52.762 CC lib/vhost/vhost_scsi.o 00:03:52.762 CC lib/vhost/vhost_blk.o 00:03:52.762 CC lib/vhost/rte_vhost_user.o 00:03:52.762 CC lib/iscsi/conn.o 00:03:52.762 CC lib/iscsi/init_grp.o 00:03:52.762 CC lib/iscsi/param.o 00:03:52.762 CC lib/iscsi/iscsi.o 00:03:52.762 CC lib/iscsi/md5.o 00:03:52.762 CC lib/iscsi/portal_grp.o 00:03:52.762 CC lib/iscsi/tgt_node.o 00:03:52.762 CC lib/iscsi/iscsi_subsystem.o 00:03:52.762 CC lib/iscsi/iscsi_rpc.o 00:03:52.762 CC lib/iscsi/task.o 00:03:52.762 SO libspdk_ftl.so.9.0 00:03:53.335 SYMLINK libspdk_ftl.so 00:03:53.598 LIB libspdk_nvmf.a 00:03:53.598 SO libspdk_nvmf.so.19.0 00:03:53.598 LIB libspdk_vhost.a 00:03:53.859 SO libspdk_vhost.so.8.0 00:03:53.859 SYMLINK libspdk_nvmf.so 00:03:53.859 SYMLINK libspdk_vhost.so 00:03:53.859 LIB libspdk_iscsi.a 00:03:54.120 SO libspdk_iscsi.so.8.0 00:03:54.120 SYMLINK libspdk_iscsi.so 00:03:54.693 CC module/vfu_device/vfu_virtio.o 00:03:54.693 CC module/vfu_device/vfu_virtio_blk.o 00:03:54.693 CC module/env_dpdk/env_dpdk_rpc.o 00:03:54.693 CC module/vfu_device/vfu_virtio_scsi.o 00:03:54.693 CC module/vfu_device/vfu_virtio_rpc.o 00:03:54.954 CC module/accel/error/accel_error.o 00:03:54.954 CC module/accel/error/accel_error_rpc.o 00:03:54.954 CC module/sock/posix/posix.o 00:03:54.954 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:54.954 LIB libspdk_env_dpdk_rpc.a 00:03:54.954 CC module/keyring/linux/keyring.o 00:03:54.954 CC module/keyring/linux/keyring_rpc.o 00:03:54.954 CC module/accel/ioat/accel_ioat.o 00:03:54.954 CC module/blob/bdev/blob_bdev.o 00:03:54.954 CC module/accel/iaa/accel_iaa_rpc.o 00:03:54.954 CC module/accel/ioat/accel_ioat_rpc.o 00:03:54.954 CC module/accel/iaa/accel_iaa.o 00:03:54.954 CC module/keyring/file/keyring.o 00:03:54.954 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:54.954 CC module/accel/dsa/accel_dsa.o 00:03:54.954 CC module/keyring/file/keyring_rpc.o 00:03:54.954 CC module/accel/dsa/accel_dsa_rpc.o 00:03:54.954 CC module/scheduler/gscheduler/gscheduler.o 00:03:54.954 SO libspdk_env_dpdk_rpc.so.6.0 00:03:54.954 SYMLINK libspdk_env_dpdk_rpc.so 00:03:54.954 LIB libspdk_scheduler_gscheduler.a 00:03:55.215 LIB libspdk_accel_error.a 00:03:55.215 LIB libspdk_keyring_linux.a 00:03:55.215 LIB libspdk_keyring_file.a 00:03:55.215 SO libspdk_accel_error.so.2.0 00:03:55.215 SO libspdk_scheduler_gscheduler.so.4.0 00:03:55.215 LIB libspdk_scheduler_dpdk_governor.a 00:03:55.215 LIB libspdk_scheduler_dynamic.a 00:03:55.215 SO libspdk_keyring_linux.so.1.0 00:03:55.215 LIB libspdk_accel_iaa.a 00:03:55.215 LIB libspdk_accel_ioat.a 00:03:55.215 SO libspdk_keyring_file.so.1.0 00:03:55.215 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:55.215 SO libspdk_accel_ioat.so.6.0 00:03:55.215 SO libspdk_scheduler_dynamic.so.4.0 00:03:55.215 
SYMLINK libspdk_scheduler_gscheduler.so 00:03:55.215 SYMLINK libspdk_accel_error.so 00:03:55.215 SO libspdk_accel_iaa.so.3.0 00:03:55.215 LIB libspdk_accel_dsa.a 00:03:55.215 LIB libspdk_blob_bdev.a 00:03:55.215 SYMLINK libspdk_keyring_linux.so 00:03:55.215 SYMLINK libspdk_keyring_file.so 00:03:55.215 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:55.215 SO libspdk_blob_bdev.so.11.0 00:03:55.215 SO libspdk_accel_dsa.so.5.0 00:03:55.215 SYMLINK libspdk_scheduler_dynamic.so 00:03:55.215 SYMLINK libspdk_accel_ioat.so 00:03:55.215 SYMLINK libspdk_accel_iaa.so 00:03:55.215 SYMLINK libspdk_blob_bdev.so 00:03:55.215 SYMLINK libspdk_accel_dsa.so 00:03:55.215 LIB libspdk_vfu_device.a 00:03:55.477 SO libspdk_vfu_device.so.3.0 00:03:55.477 SYMLINK libspdk_vfu_device.so 00:03:55.477 LIB libspdk_sock_posix.a 00:03:55.739 SO libspdk_sock_posix.so.6.0 00:03:55.739 SYMLINK libspdk_sock_posix.so 00:03:55.739 CC module/blobfs/bdev/blobfs_bdev.o 00:03:55.739 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:55.739 CC module/bdev/raid/bdev_raid.o 00:03:55.739 CC module/bdev/raid/bdev_raid_rpc.o 00:03:55.739 CC module/bdev/raid/bdev_raid_sb.o 00:03:56.000 CC module/bdev/raid/raid0.o 00:03:56.000 CC module/bdev/delay/vbdev_delay.o 00:03:56.000 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:56.000 CC module/bdev/raid/concat.o 00:03:56.000 CC module/bdev/raid/raid1.o 00:03:56.000 CC module/bdev/null/bdev_null.o 00:03:56.000 CC module/bdev/malloc/bdev_malloc.o 00:03:56.000 CC module/bdev/null/bdev_null_rpc.o 00:03:56.000 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:56.000 CC module/bdev/lvol/vbdev_lvol.o 00:03:56.000 CC module/bdev/ftl/bdev_ftl.o 00:03:56.000 CC module/bdev/error/vbdev_error.o 00:03:56.000 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:56.000 CC module/bdev/error/vbdev_error_rpc.o 00:03:56.000 CC module/bdev/gpt/gpt.o 00:03:56.000 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:56.000 CC module/bdev/gpt/vbdev_gpt.o 00:03:56.000 CC module/bdev/nvme/bdev_nvme.o 00:03:56.000 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:56.000 CC module/bdev/split/vbdev_split.o 00:03:56.000 CC module/bdev/passthru/vbdev_passthru.o 00:03:56.000 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:56.000 CC module/bdev/split/vbdev_split_rpc.o 00:03:56.000 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:56.000 CC module/bdev/nvme/nvme_rpc.o 00:03:56.000 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:56.000 CC module/bdev/nvme/vbdev_opal.o 00:03:56.000 CC module/bdev/nvme/bdev_mdns_client.o 00:03:56.000 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:56.000 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:56.000 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:56.000 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:56.000 CC module/bdev/aio/bdev_aio.o 00:03:56.000 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:56.000 CC module/bdev/aio/bdev_aio_rpc.o 00:03:56.000 CC module/bdev/iscsi/bdev_iscsi.o 00:03:56.000 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:56.000 LIB libspdk_blobfs_bdev.a 00:03:56.267 SO libspdk_blobfs_bdev.so.6.0 00:03:56.267 LIB libspdk_bdev_split.a 00:03:56.267 LIB libspdk_bdev_error.a 00:03:56.267 LIB libspdk_bdev_gpt.a 00:03:56.267 LIB libspdk_bdev_null.a 00:03:56.267 SO libspdk_bdev_split.so.6.0 00:03:56.267 SYMLINK libspdk_blobfs_bdev.so 00:03:56.267 LIB libspdk_bdev_ftl.a 00:03:56.267 LIB libspdk_bdev_malloc.a 00:03:56.267 SO libspdk_bdev_gpt.so.6.0 00:03:56.267 SO libspdk_bdev_error.so.6.0 00:03:56.267 SO libspdk_bdev_null.so.6.0 00:03:56.267 LIB libspdk_bdev_passthru.a 00:03:56.267 LIB libspdk_bdev_zone_block.a 
00:03:56.267 SO libspdk_bdev_malloc.so.6.0 00:03:56.267 SO libspdk_bdev_ftl.so.6.0 00:03:56.267 LIB libspdk_bdev_delay.a 00:03:56.267 LIB libspdk_bdev_aio.a 00:03:56.267 SYMLINK libspdk_bdev_split.so 00:03:56.267 SO libspdk_bdev_passthru.so.6.0 00:03:56.267 SYMLINK libspdk_bdev_gpt.so 00:03:56.267 SO libspdk_bdev_zone_block.so.6.0 00:03:56.267 LIB libspdk_bdev_iscsi.a 00:03:56.267 SYMLINK libspdk_bdev_error.so 00:03:56.267 SYMLINK libspdk_bdev_null.so 00:03:56.267 SYMLINK libspdk_bdev_ftl.so 00:03:56.267 SO libspdk_bdev_delay.so.6.0 00:03:56.267 SO libspdk_bdev_aio.so.6.0 00:03:56.267 SYMLINK libspdk_bdev_malloc.so 00:03:56.267 SO libspdk_bdev_iscsi.so.6.0 00:03:56.267 SYMLINK libspdk_bdev_zone_block.so 00:03:56.267 SYMLINK libspdk_bdev_passthru.so 00:03:56.267 LIB libspdk_bdev_lvol.a 00:03:56.267 SYMLINK libspdk_bdev_delay.so 00:03:56.267 SYMLINK libspdk_bdev_aio.so 00:03:56.528 SYMLINK libspdk_bdev_iscsi.so 00:03:56.528 LIB libspdk_bdev_virtio.a 00:03:56.528 SO libspdk_bdev_lvol.so.6.0 00:03:56.528 SO libspdk_bdev_virtio.so.6.0 00:03:56.528 SYMLINK libspdk_bdev_lvol.so 00:03:56.528 SYMLINK libspdk_bdev_virtio.so 00:03:56.789 LIB libspdk_bdev_raid.a 00:03:56.789 SO libspdk_bdev_raid.so.6.0 00:03:56.789 SYMLINK libspdk_bdev_raid.so 00:03:57.734 LIB libspdk_bdev_nvme.a 00:03:57.734 SO libspdk_bdev_nvme.so.7.0 00:03:57.995 SYMLINK libspdk_bdev_nvme.so 00:03:58.569 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:58.569 CC module/event/subsystems/vmd/vmd.o 00:03:58.569 CC module/event/subsystems/scheduler/scheduler.o 00:03:58.569 CC module/event/subsystems/iobuf/iobuf.o 00:03:58.569 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:58.569 CC module/event/subsystems/sock/sock.o 00:03:58.569 CC module/event/subsystems/keyring/keyring.o 00:03:58.569 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:58.569 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:58.832 LIB libspdk_event_vmd.a 00:03:58.832 LIB libspdk_event_keyring.a 00:03:58.832 LIB libspdk_event_scheduler.a 00:03:58.832 LIB libspdk_event_vhost_blk.a 00:03:58.832 LIB libspdk_event_sock.a 00:03:58.832 LIB libspdk_event_vfu_tgt.a 00:03:58.832 LIB libspdk_event_iobuf.a 00:03:58.832 SO libspdk_event_vmd.so.6.0 00:03:58.832 SO libspdk_event_keyring.so.1.0 00:03:58.832 SO libspdk_event_scheduler.so.4.0 00:03:58.832 SO libspdk_event_sock.so.5.0 00:03:58.832 SO libspdk_event_vfu_tgt.so.3.0 00:03:58.832 SO libspdk_event_vhost_blk.so.3.0 00:03:58.832 SO libspdk_event_iobuf.so.3.0 00:03:58.832 SYMLINK libspdk_event_scheduler.so 00:03:58.832 SYMLINK libspdk_event_keyring.so 00:03:58.832 SYMLINK libspdk_event_vmd.so 00:03:58.832 SYMLINK libspdk_event_vfu_tgt.so 00:03:58.832 SYMLINK libspdk_event_sock.so 00:03:58.832 SYMLINK libspdk_event_vhost_blk.so 00:03:58.832 SYMLINK libspdk_event_iobuf.so 00:03:59.406 CC module/event/subsystems/accel/accel.o 00:03:59.406 LIB libspdk_event_accel.a 00:03:59.406 SO libspdk_event_accel.so.6.0 00:03:59.666 SYMLINK libspdk_event_accel.so 00:03:59.927 CC module/event/subsystems/bdev/bdev.o 00:04:00.188 LIB libspdk_event_bdev.a 00:04:00.188 SO libspdk_event_bdev.so.6.0 00:04:00.188 SYMLINK libspdk_event_bdev.so 00:04:00.448 CC module/event/subsystems/scsi/scsi.o 00:04:00.448 CC module/event/subsystems/ublk/ublk.o 00:04:00.448 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:00.448 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:00.448 CC module/event/subsystems/nbd/nbd.o 00:04:00.708 LIB libspdk_event_ublk.a 00:04:00.708 LIB libspdk_event_nbd.a 00:04:00.708 LIB libspdk_event_scsi.a 00:04:00.708 SO 
libspdk_event_ublk.so.3.0 00:04:00.708 SO libspdk_event_nbd.so.6.0 00:04:00.708 SO libspdk_event_scsi.so.6.0 00:04:00.708 LIB libspdk_event_nvmf.a 00:04:00.708 SYMLINK libspdk_event_nbd.so 00:04:00.708 SO libspdk_event_nvmf.so.6.0 00:04:00.708 SYMLINK libspdk_event_ublk.so 00:04:00.708 SYMLINK libspdk_event_scsi.so 00:04:00.969 SYMLINK libspdk_event_nvmf.so 00:04:01.230 CC module/event/subsystems/iscsi/iscsi.o 00:04:01.230 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:01.230 LIB libspdk_event_vhost_scsi.a 00:04:01.230 LIB libspdk_event_iscsi.a 00:04:01.230 SO libspdk_event_vhost_scsi.so.3.0 00:04:01.491 SO libspdk_event_iscsi.so.6.0 00:04:01.491 SYMLINK libspdk_event_vhost_scsi.so 00:04:01.491 SYMLINK libspdk_event_iscsi.so 00:04:01.752 SO libspdk.so.6.0 00:04:01.752 SYMLINK libspdk.so 00:04:02.014 CC app/trace_record/trace_record.o 00:04:02.014 CC test/rpc_client/rpc_client_test.o 00:04:02.014 CXX app/trace/trace.o 00:04:02.014 TEST_HEADER include/spdk/accel.h 00:04:02.014 TEST_HEADER include/spdk/barrier.h 00:04:02.014 TEST_HEADER include/spdk/accel_module.h 00:04:02.014 TEST_HEADER include/spdk/assert.h 00:04:02.014 TEST_HEADER include/spdk/base64.h 00:04:02.014 TEST_HEADER include/spdk/bdev.h 00:04:02.014 TEST_HEADER include/spdk/bdev_zone.h 00:04:02.014 TEST_HEADER include/spdk/bdev_module.h 00:04:02.014 CC app/spdk_nvme_discover/discovery_aer.o 00:04:02.014 CC app/spdk_lspci/spdk_lspci.o 00:04:02.014 TEST_HEADER include/spdk/bit_array.h 00:04:02.014 TEST_HEADER include/spdk/bit_pool.h 00:04:02.014 CC app/spdk_nvme_perf/perf.o 00:04:02.014 TEST_HEADER include/spdk/blob_bdev.h 00:04:02.014 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:02.014 TEST_HEADER include/spdk/blobfs.h 00:04:02.014 TEST_HEADER include/spdk/blob.h 00:04:02.014 TEST_HEADER include/spdk/conf.h 00:04:02.014 TEST_HEADER include/spdk/config.h 00:04:02.014 CC app/spdk_top/spdk_top.o 00:04:02.014 TEST_HEADER include/spdk/cpuset.h 00:04:02.014 CC app/spdk_nvme_identify/identify.o 00:04:02.014 TEST_HEADER include/spdk/crc16.h 00:04:02.014 TEST_HEADER include/spdk/crc32.h 00:04:02.014 TEST_HEADER include/spdk/crc64.h 00:04:02.014 TEST_HEADER include/spdk/dif.h 00:04:02.014 TEST_HEADER include/spdk/dma.h 00:04:02.014 TEST_HEADER include/spdk/endian.h 00:04:02.014 TEST_HEADER include/spdk/env_dpdk.h 00:04:02.014 TEST_HEADER include/spdk/env.h 00:04:02.014 TEST_HEADER include/spdk/event.h 00:04:02.014 TEST_HEADER include/spdk/fd_group.h 00:04:02.014 TEST_HEADER include/spdk/fd.h 00:04:02.014 TEST_HEADER include/spdk/file.h 00:04:02.014 TEST_HEADER include/spdk/ftl.h 00:04:02.014 TEST_HEADER include/spdk/gpt_spec.h 00:04:02.014 TEST_HEADER include/spdk/hexlify.h 00:04:02.014 TEST_HEADER include/spdk/histogram_data.h 00:04:02.014 TEST_HEADER include/spdk/idxd.h 00:04:02.014 TEST_HEADER include/spdk/idxd_spec.h 00:04:02.014 TEST_HEADER include/spdk/init.h 00:04:02.014 TEST_HEADER include/spdk/ioat_spec.h 00:04:02.014 TEST_HEADER include/spdk/ioat.h 00:04:02.014 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:02.014 TEST_HEADER include/spdk/iscsi_spec.h 00:04:02.014 TEST_HEADER include/spdk/keyring.h 00:04:02.014 TEST_HEADER include/spdk/json.h 00:04:02.014 TEST_HEADER include/spdk/jsonrpc.h 00:04:02.014 TEST_HEADER include/spdk/keyring_module.h 00:04:02.014 TEST_HEADER include/spdk/likely.h 00:04:02.014 CC app/spdk_dd/spdk_dd.o 00:04:02.014 TEST_HEADER include/spdk/log.h 00:04:02.014 TEST_HEADER include/spdk/lvol.h 00:04:02.014 TEST_HEADER include/spdk/mmio.h 00:04:02.014 CC app/nvmf_tgt/nvmf_main.o 00:04:02.014 
TEST_HEADER include/spdk/memory.h 00:04:02.014 TEST_HEADER include/spdk/notify.h 00:04:02.014 TEST_HEADER include/spdk/nbd.h 00:04:02.014 TEST_HEADER include/spdk/net.h 00:04:02.014 TEST_HEADER include/spdk/nvme.h 00:04:02.014 TEST_HEADER include/spdk/nvme_intel.h 00:04:02.014 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:02.014 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:02.014 TEST_HEADER include/spdk/nvme_spec.h 00:04:02.014 TEST_HEADER include/spdk/nvme_zns.h 00:04:02.014 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:02.014 CC app/iscsi_tgt/iscsi_tgt.o 00:04:02.015 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:02.015 TEST_HEADER include/spdk/nvmf.h 00:04:02.015 TEST_HEADER include/spdk/nvmf_spec.h 00:04:02.015 TEST_HEADER include/spdk/nvmf_transport.h 00:04:02.015 TEST_HEADER include/spdk/opal.h 00:04:02.015 TEST_HEADER include/spdk/opal_spec.h 00:04:02.015 TEST_HEADER include/spdk/pci_ids.h 00:04:02.015 TEST_HEADER include/spdk/pipe.h 00:04:02.015 CC app/spdk_tgt/spdk_tgt.o 00:04:02.015 TEST_HEADER include/spdk/queue.h 00:04:02.015 TEST_HEADER include/spdk/reduce.h 00:04:02.015 TEST_HEADER include/spdk/rpc.h 00:04:02.015 TEST_HEADER include/spdk/scsi_spec.h 00:04:02.015 TEST_HEADER include/spdk/scheduler.h 00:04:02.015 TEST_HEADER include/spdk/sock.h 00:04:02.015 TEST_HEADER include/spdk/scsi.h 00:04:02.015 TEST_HEADER include/spdk/stdinc.h 00:04:02.015 TEST_HEADER include/spdk/thread.h 00:04:02.015 TEST_HEADER include/spdk/string.h 00:04:02.015 TEST_HEADER include/spdk/trace.h 00:04:02.015 TEST_HEADER include/spdk/trace_parser.h 00:04:02.015 TEST_HEADER include/spdk/tree.h 00:04:02.015 TEST_HEADER include/spdk/ublk.h 00:04:02.015 TEST_HEADER include/spdk/uuid.h 00:04:02.015 TEST_HEADER include/spdk/util.h 00:04:02.015 TEST_HEADER include/spdk/version.h 00:04:02.015 TEST_HEADER include/spdk/vhost.h 00:04:02.015 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:02.015 TEST_HEADER include/spdk/vmd.h 00:04:02.015 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:02.015 TEST_HEADER include/spdk/zipf.h 00:04:02.015 TEST_HEADER include/spdk/xor.h 00:04:02.015 CXX test/cpp_headers/accel.o 00:04:02.015 CXX test/cpp_headers/accel_module.o 00:04:02.015 CXX test/cpp_headers/assert.o 00:04:02.015 CXX test/cpp_headers/barrier.o 00:04:02.015 CXX test/cpp_headers/base64.o 00:04:02.015 CXX test/cpp_headers/bdev.o 00:04:02.015 CXX test/cpp_headers/bdev_zone.o 00:04:02.015 CXX test/cpp_headers/bit_array.o 00:04:02.015 CXX test/cpp_headers/bdev_module.o 00:04:02.015 CXX test/cpp_headers/bit_pool.o 00:04:02.015 CXX test/cpp_headers/blob_bdev.o 00:04:02.015 CXX test/cpp_headers/blobfs_bdev.o 00:04:02.015 CXX test/cpp_headers/blobfs.o 00:04:02.276 CXX test/cpp_headers/conf.o 00:04:02.276 CXX test/cpp_headers/blob.o 00:04:02.276 CXX test/cpp_headers/config.o 00:04:02.276 CXX test/cpp_headers/cpuset.o 00:04:02.276 CXX test/cpp_headers/crc16.o 00:04:02.276 CXX test/cpp_headers/crc32.o 00:04:02.276 CXX test/cpp_headers/crc64.o 00:04:02.276 CXX test/cpp_headers/dif.o 00:04:02.276 CXX test/cpp_headers/dma.o 00:04:02.276 CXX test/cpp_headers/endian.o 00:04:02.276 CXX test/cpp_headers/env_dpdk.o 00:04:02.276 CXX test/cpp_headers/env.o 00:04:02.276 CXX test/cpp_headers/fd_group.o 00:04:02.276 CXX test/cpp_headers/event.o 00:04:02.276 CXX test/cpp_headers/fd.o 00:04:02.276 CXX test/cpp_headers/file.o 00:04:02.276 CXX test/cpp_headers/hexlify.o 00:04:02.276 CXX test/cpp_headers/ftl.o 00:04:02.276 CXX test/cpp_headers/gpt_spec.o 00:04:02.276 CXX test/cpp_headers/histogram_data.o 00:04:02.276 CXX test/cpp_headers/idxd.o 
00:04:02.276 CXX test/cpp_headers/idxd_spec.o 00:04:02.276 CXX test/cpp_headers/ioat_spec.o 00:04:02.276 CXX test/cpp_headers/ioat.o 00:04:02.276 CXX test/cpp_headers/init.o 00:04:02.276 CXX test/cpp_headers/iscsi_spec.o 00:04:02.276 CXX test/cpp_headers/json.o 00:04:02.276 CXX test/cpp_headers/jsonrpc.o 00:04:02.276 CXX test/cpp_headers/keyring.o 00:04:02.276 CXX test/cpp_headers/keyring_module.o 00:04:02.276 CXX test/cpp_headers/likely.o 00:04:02.276 CXX test/cpp_headers/lvol.o 00:04:02.276 CXX test/cpp_headers/log.o 00:04:02.276 CXX test/cpp_headers/memory.o 00:04:02.276 CXX test/cpp_headers/mmio.o 00:04:02.276 CXX test/cpp_headers/nbd.o 00:04:02.276 CXX test/cpp_headers/net.o 00:04:02.276 CXX test/cpp_headers/nvme.o 00:04:02.276 CXX test/cpp_headers/nvme_intel.o 00:04:02.276 CXX test/cpp_headers/notify.o 00:04:02.276 CXX test/cpp_headers/nvme_ocssd.o 00:04:02.276 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:02.276 CXX test/cpp_headers/nvme_zns.o 00:04:02.276 CXX test/cpp_headers/nvme_spec.o 00:04:02.276 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:02.276 CXX test/cpp_headers/nvmf_cmd.o 00:04:02.276 CXX test/cpp_headers/nvmf_spec.o 00:04:02.276 CXX test/cpp_headers/nvmf.o 00:04:02.276 CXX test/cpp_headers/nvmf_transport.o 00:04:02.276 CXX test/cpp_headers/pci_ids.o 00:04:02.276 CXX test/cpp_headers/opal_spec.o 00:04:02.276 CXX test/cpp_headers/opal.o 00:04:02.276 CXX test/cpp_headers/pipe.o 00:04:02.276 CC test/app/histogram_perf/histogram_perf.o 00:04:02.276 CXX test/cpp_headers/queue.o 00:04:02.276 CXX test/cpp_headers/reduce.o 00:04:02.276 CC test/env/vtophys/vtophys.o 00:04:02.276 CXX test/cpp_headers/scsi_spec.o 00:04:02.276 CXX test/cpp_headers/rpc.o 00:04:02.276 CC test/app/stub/stub.o 00:04:02.276 CXX test/cpp_headers/scheduler.o 00:04:02.276 CXX test/cpp_headers/scsi.o 00:04:02.276 CXX test/cpp_headers/sock.o 00:04:02.276 CXX test/cpp_headers/stdinc.o 00:04:02.276 CXX test/cpp_headers/thread.o 00:04:02.276 CXX test/cpp_headers/trace.o 00:04:02.276 CXX test/cpp_headers/string.o 00:04:02.276 CXX test/cpp_headers/trace_parser.o 00:04:02.276 CXX test/cpp_headers/tree.o 00:04:02.276 CXX test/cpp_headers/ublk.o 00:04:02.276 CXX test/cpp_headers/util.o 00:04:02.276 CXX test/cpp_headers/uuid.o 00:04:02.276 CC test/env/memory/memory_ut.o 00:04:02.276 CXX test/cpp_headers/version.o 00:04:02.276 CXX test/cpp_headers/vfio_user_spec.o 00:04:02.276 CXX test/cpp_headers/vhost.o 00:04:02.276 CXX test/cpp_headers/vfio_user_pci.o 00:04:02.276 CC test/thread/poller_perf/poller_perf.o 00:04:02.276 CC test/app/jsoncat/jsoncat.o 00:04:02.276 CXX test/cpp_headers/vmd.o 00:04:02.276 LINK spdk_lspci 00:04:02.276 CXX test/cpp_headers/xor.o 00:04:02.276 CC examples/util/zipf/zipf.o 00:04:02.276 CXX test/cpp_headers/zipf.o 00:04:02.276 CC examples/ioat/perf/perf.o 00:04:02.276 CC test/env/pci/pci_ut.o 00:04:02.276 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:02.276 CC app/fio/nvme/fio_plugin.o 00:04:02.276 CC examples/ioat/verify/verify.o 00:04:02.276 CC test/app/bdev_svc/bdev_svc.o 00:04:02.276 CC app/fio/bdev/fio_plugin.o 00:04:02.276 LINK rpc_client_test 00:04:02.276 CC test/dma/test_dma/test_dma.o 00:04:02.276 LINK spdk_nvme_discover 00:04:02.539 LINK nvmf_tgt 00:04:02.539 LINK spdk_trace_record 00:04:02.539 LINK iscsi_tgt 00:04:02.539 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:02.539 LINK interrupt_tgt 00:04:02.539 LINK spdk_tgt 00:04:02.539 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:02.539 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:02.539 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:02.539 CC test/env/mem_callbacks/mem_callbacks.o 00:04:02.798 LINK histogram_perf 00:04:02.798 LINK spdk_dd 00:04:02.798 LINK vtophys 00:04:02.798 LINK zipf 00:04:02.798 LINK poller_perf 00:04:02.798 LINK jsoncat 00:04:02.798 LINK stub 00:04:03.100 LINK env_dpdk_post_init 00:04:03.100 LINK bdev_svc 00:04:03.100 LINK ioat_perf 00:04:03.100 LINK test_dma 00:04:03.100 LINK spdk_trace 00:04:03.100 LINK verify 00:04:03.100 LINK nvme_fuzz 00:04:03.100 LINK vhost_fuzz 00:04:03.100 LINK pci_ut 00:04:03.364 LINK spdk_bdev 00:04:03.364 LINK spdk_nvme_identify 00:04:03.364 LINK spdk_nvme 00:04:03.364 LINK spdk_nvme_perf 00:04:03.364 CC examples/vmd/led/led.o 00:04:03.364 CC test/event/reactor/reactor.o 00:04:03.364 CC examples/idxd/perf/perf.o 00:04:03.364 CC test/event/event_perf/event_perf.o 00:04:03.364 CC examples/vmd/lsvmd/lsvmd.o 00:04:03.364 LINK spdk_top 00:04:03.364 CC examples/sock/hello_world/hello_sock.o 00:04:03.364 CC test/event/reactor_perf/reactor_perf.o 00:04:03.364 CC examples/thread/thread/thread_ex.o 00:04:03.364 CC test/event/app_repeat/app_repeat.o 00:04:03.364 CC app/vhost/vhost.o 00:04:03.364 CC test/event/scheduler/scheduler.o 00:04:03.364 LINK mem_callbacks 00:04:03.689 LINK lsvmd 00:04:03.689 LINK reactor 00:04:03.689 LINK event_perf 00:04:03.689 LINK led 00:04:03.689 CC test/nvme/err_injection/err_injection.o 00:04:03.689 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:03.689 LINK reactor_perf 00:04:03.689 LINK app_repeat 00:04:03.689 CC test/nvme/sgl/sgl.o 00:04:03.689 CC test/nvme/overhead/overhead.o 00:04:03.689 CC test/nvme/reserve/reserve.o 00:04:03.689 CC test/nvme/reset/reset.o 00:04:03.689 CC test/nvme/startup/startup.o 00:04:03.689 CC test/nvme/connect_stress/connect_stress.o 00:04:03.689 CC test/nvme/e2edp/nvme_dp.o 00:04:03.689 CC test/nvme/fdp/fdp.o 00:04:03.689 CC test/nvme/compliance/nvme_compliance.o 00:04:03.689 CC test/nvme/aer/aer.o 00:04:03.689 CC test/nvme/fused_ordering/fused_ordering.o 00:04:03.689 CC test/nvme/boot_partition/boot_partition.o 00:04:03.689 CC test/nvme/simple_copy/simple_copy.o 00:04:03.689 CC test/nvme/cuse/cuse.o 00:04:03.689 CC test/blobfs/mkfs/mkfs.o 00:04:03.689 CC test/accel/dif/dif.o 00:04:03.689 LINK vhost 00:04:03.689 LINK hello_sock 00:04:03.689 LINK scheduler 00:04:03.689 LINK idxd_perf 00:04:03.689 LINK thread 00:04:03.689 CC test/lvol/esnap/esnap.o 00:04:03.689 LINK startup 00:04:03.689 LINK connect_stress 00:04:03.689 LINK doorbell_aers 00:04:03.689 LINK err_injection 00:04:03.689 LINK fused_ordering 00:04:03.689 LINK reserve 00:04:03.689 LINK boot_partition 00:04:03.689 LINK memory_ut 00:04:03.950 LINK nvme_dp 00:04:03.950 LINK simple_copy 00:04:03.950 LINK reset 00:04:03.950 LINK mkfs 00:04:03.950 LINK sgl 00:04:03.950 LINK overhead 00:04:03.950 LINK fdp 00:04:03.950 LINK aer 00:04:03.950 LINK nvme_compliance 00:04:03.950 LINK dif 00:04:04.211 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:04.211 CC examples/nvme/abort/abort.o 00:04:04.211 CC examples/nvme/hotplug/hotplug.o 00:04:04.211 CC examples/nvme/reconnect/reconnect.o 00:04:04.211 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:04.211 CC examples/nvme/hello_world/hello_world.o 00:04:04.211 CC examples/nvme/arbitration/arbitration.o 00:04:04.211 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:04.211 LINK iscsi_fuzz 00:04:04.211 CC examples/accel/perf/accel_perf.o 00:04:04.211 CC examples/blob/cli/blobcli.o 00:04:04.211 CC examples/blob/hello_world/hello_blob.o 00:04:04.211 LINK cmb_copy 00:04:04.471 LINK 
pmr_persistence 00:04:04.471 LINK hello_world 00:04:04.471 LINK hotplug 00:04:04.471 LINK arbitration 00:04:04.471 LINK reconnect 00:04:04.471 LINK abort 00:04:04.471 LINK hello_blob 00:04:04.471 CC test/bdev/bdevio/bdevio.o 00:04:04.471 LINK blobcli 00:04:04.471 LINK nvme_manage 00:04:04.733 LINK accel_perf 00:04:04.733 LINK cuse 00:04:04.994 LINK bdevio 00:04:05.254 CC examples/bdev/hello_world/hello_bdev.o 00:04:05.254 CC examples/bdev/bdevperf/bdevperf.o 00:04:05.516 LINK hello_bdev 00:04:05.777 LINK bdevperf 00:04:06.350 CC examples/nvmf/nvmf/nvmf.o 00:04:06.922 LINK nvmf 00:04:07.864 LINK esnap 00:04:08.126 00:04:08.126 real 0m51.335s 00:04:08.126 user 6m33.816s 00:04:08.126 sys 4m13.189s 00:04:08.126 07:10:15 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:08.126 07:10:15 make -- common/autotest_common.sh@10 -- $ set +x 00:04:08.126 ************************************ 00:04:08.126 END TEST make 00:04:08.126 ************************************ 00:04:08.388 07:10:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:08.388 07:10:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:08.388 07:10:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:08.389 07:10:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.389 07:10:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:08.389 07:10:15 -- pm/common@44 -- $ pid=3954713 00:04:08.389 07:10:15 -- pm/common@50 -- $ kill -TERM 3954713 00:04:08.389 07:10:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.389 07:10:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:08.389 07:10:15 -- pm/common@44 -- $ pid=3954714 00:04:08.389 07:10:15 -- pm/common@50 -- $ kill -TERM 3954714 00:04:08.389 07:10:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.389 07:10:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:08.389 07:10:15 -- pm/common@44 -- $ pid=3954716 00:04:08.389 07:10:15 -- pm/common@50 -- $ kill -TERM 3954716 00:04:08.389 07:10:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.389 07:10:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:08.389 07:10:15 -- pm/common@44 -- $ pid=3954740 00:04:08.389 07:10:15 -- pm/common@50 -- $ sudo -E kill -TERM 3954740 00:04:08.389 07:10:15 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:08.389 07:10:15 -- nvmf/common.sh@7 -- # uname -s 00:04:08.389 07:10:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:08.389 07:10:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:08.389 07:10:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:08.389 07:10:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:08.389 07:10:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:08.389 07:10:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:08.389 07:10:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:08.389 07:10:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:08.389 07:10:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:08.389 07:10:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:08.389 07:10:15 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:08.389 07:10:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:08.389 07:10:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:08.389 07:10:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:08.389 07:10:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:08.389 07:10:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:08.389 07:10:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:08.389 07:10:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:08.389 07:10:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:08.389 07:10:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:08.389 07:10:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.389 07:10:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.389 07:10:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.389 07:10:15 -- paths/export.sh@5 -- # export PATH 00:04:08.389 07:10:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.389 07:10:15 -- nvmf/common.sh@47 -- # : 0 00:04:08.389 07:10:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:08.389 07:10:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:08.389 07:10:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:08.389 07:10:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:08.389 07:10:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:08.389 07:10:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:08.389 07:10:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:08.389 07:10:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:08.389 07:10:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:08.389 07:10:15 -- spdk/autotest.sh@32 -- # uname -s 00:04:08.389 07:10:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:08.389 07:10:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:08.389 07:10:15 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:08.389 07:10:15 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:08.389 07:10:15 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:08.389 07:10:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:08.389 07:10:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:08.389 07:10:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:08.389 07:10:15 -- spdk/autotest.sh@48 -- # udevadm_pid=4018408 00:04:08.389 07:10:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:08.389 07:10:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:08.389 07:10:15 -- pm/common@17 -- # local monitor 00:04:08.389 07:10:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.389 07:10:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.389 07:10:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.389 07:10:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.389 07:10:15 -- pm/common@21 -- # date +%s 00:04:08.389 07:10:15 -- pm/common@21 -- # date +%s 00:04:08.389 07:10:15 -- pm/common@25 -- # sleep 1 00:04:08.389 07:10:15 -- pm/common@21 -- # date +%s 00:04:08.389 07:10:15 -- pm/common@21 -- # date +%s 00:04:08.389 07:10:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884215 00:04:08.389 07:10:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884215 00:04:08.389 07:10:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884215 00:04:08.389 07:10:15 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721884215 00:04:08.651 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884215_collect-vmstat.pm.log 00:04:08.651 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884215_collect-cpu-load.pm.log 00:04:08.651 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884215_collect-cpu-temp.pm.log 00:04:08.651 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721884215_collect-bmc-pm.bmc.pm.log 00:04:09.594 07:10:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:09.594 07:10:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:09.594 07:10:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.594 07:10:16 -- common/autotest_common.sh@10 -- # set +x 00:04:09.594 07:10:16 -- spdk/autotest.sh@59 -- # create_test_list 00:04:09.594 07:10:16 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:09.594 07:10:16 -- common/autotest_common.sh@10 -- # set +x 00:04:09.594 07:10:16 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:09.594 07:10:16 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:09.594 07:10:16 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:04:09.594 07:10:16 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:09.594 07:10:16 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:09.594 07:10:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:09.595 07:10:16 -- common/autotest_common.sh@1455 -- # uname 00:04:09.595 07:10:16 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:09.595 07:10:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:09.595 07:10:16 -- common/autotest_common.sh@1475 -- # uname 00:04:09.595 07:10:16 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:09.595 07:10:16 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:09.595 07:10:16 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:09.595 07:10:16 -- spdk/autotest.sh@72 -- # hash lcov 00:04:09.595 07:10:16 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:09.595 07:10:16 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:09.595 --rc lcov_branch_coverage=1 00:04:09.595 --rc lcov_function_coverage=1 00:04:09.595 --rc genhtml_branch_coverage=1 00:04:09.595 --rc genhtml_function_coverage=1 00:04:09.595 --rc genhtml_legend=1 00:04:09.595 --rc geninfo_all_blocks=1 00:04:09.595 ' 00:04:09.595 07:10:16 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:09.595 --rc lcov_branch_coverage=1 00:04:09.595 --rc lcov_function_coverage=1 00:04:09.595 --rc genhtml_branch_coverage=1 00:04:09.595 --rc genhtml_function_coverage=1 00:04:09.595 --rc genhtml_legend=1 00:04:09.595 --rc geninfo_all_blocks=1 00:04:09.595 ' 00:04:09.595 07:10:16 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:09.595 --rc lcov_branch_coverage=1 00:04:09.595 --rc lcov_function_coverage=1 00:04:09.595 --rc genhtml_branch_coverage=1 00:04:09.595 --rc genhtml_function_coverage=1 00:04:09.595 --rc genhtml_legend=1 00:04:09.595 --rc geninfo_all_blocks=1 00:04:09.595 --no-external' 00:04:09.595 07:10:16 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:09.595 --rc lcov_branch_coverage=1 00:04:09.595 --rc lcov_function_coverage=1 00:04:09.595 --rc genhtml_branch_coverage=1 00:04:09.595 --rc genhtml_function_coverage=1 00:04:09.595 --rc genhtml_legend=1 00:04:09.595 --rc geninfo_all_blocks=1 00:04:09.595 --no-external' 00:04:09.595 07:10:16 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:09.595 lcov: LCOV version 1.14 00:04:09.595 07:10:16 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:10.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:10.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:10.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:10.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:10.982 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:10.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:10.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:10.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:10.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:10.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:10.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:10.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:10.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:10.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:10.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:10.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:10.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:10.982 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:11.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:11.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:11.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:11.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:11.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:11.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:11.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:11.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:11.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:11.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:11.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:11.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:11.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:11.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:11.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:11.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:11.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:11.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:11.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:11.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:11.506 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:11.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:11.506 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:11.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:11.506 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:11.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:11.507 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:11.507 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:11.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:11.507 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:11.768 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:11.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:11.768 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:26.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:26.694 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:41.679 07:10:46 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:41.679 07:10:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.679 07:10:46 -- common/autotest_common.sh@10 -- # set +x 00:04:41.679 07:10:46 -- spdk/autotest.sh@91 -- # rm -f 00:04:41.679 07:10:46 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.622 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:42.622 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:42.622 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:42.622 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:42.622 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:42.622 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:42.622 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:42.622 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:42.622 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:42.884 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:42.884 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:42.884 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:42.884 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:42.884 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:42.884 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:42.884 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:42.884 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:43.145 07:10:50 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:43.145 07:10:50 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:43.145 07:10:50 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:43.145 07:10:50 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:43.145 07:10:50 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:43.145 07:10:50 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:43.145 07:10:50 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 
00:04:43.145 07:10:50 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:43.145 07:10:50 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:43.145 07:10:50 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:43.145 07:10:50 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:43.145 07:10:50 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:43.145 07:10:50 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:43.145 07:10:50 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:43.145 07:10:50 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:43.145 No valid GPT data, bailing 00:04:43.145 07:10:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:43.145 07:10:50 -- scripts/common.sh@391 -- # pt= 00:04:43.145 07:10:50 -- scripts/common.sh@392 -- # return 1 00:04:43.145 07:10:50 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:43.145 1+0 records in 00:04:43.145 1+0 records out 00:04:43.145 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442141 s, 237 MB/s 00:04:43.145 07:10:50 -- spdk/autotest.sh@118 -- # sync 00:04:43.145 07:10:50 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:43.145 07:10:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:43.145 07:10:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:51.290 07:10:58 -- spdk/autotest.sh@124 -- # uname -s 00:04:51.290 07:10:58 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:51.290 07:10:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:51.290 07:10:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.290 07:10:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.290 07:10:58 -- common/autotest_common.sh@10 -- # set +x 00:04:51.290 ************************************ 00:04:51.290 START TEST setup.sh 00:04:51.290 ************************************ 00:04:51.290 07:10:58 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:51.290 * Looking for test storage... 00:04:51.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:51.290 07:10:58 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:51.290 07:10:58 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:51.290 07:10:58 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:51.290 07:10:58 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.290 07:10:58 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.290 07:10:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:51.552 ************************************ 00:04:51.552 START TEST acl 00:04:51.552 ************************************ 00:04:51.552 07:10:58 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:51.552 * Looking for test storage... 
00:04:51.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:51.552 07:10:58 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:51.552 07:10:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:51.552 07:10:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:51.552 07:10:58 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:51.552 07:10:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:51.552 07:10:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:51.552 07:10:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:51.552 07:10:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:51.552 07:10:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:51.552 07:10:58 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:51.552 07:10:58 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:51.552 07:10:58 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:51.552 07:10:58 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:51.552 07:10:58 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:51.552 07:10:58 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.552 07:10:58 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:55.763 07:11:02 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:55.763 07:11:02 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:55.763 07:11:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.763 07:11:02 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:55.763 07:11:02 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.763 07:11:02 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:59.060 Hugepages 00:04:59.060 node hugesize free / total 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.060 00:04:59.060 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.060 07:11:05 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.061 07:11:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:59.061 07:11:06 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:59.061 07:11:06 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.061 07:11:06 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.061 07:11:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:59.061 ************************************ 00:04:59.061 START TEST denied 00:04:59.061 ************************************ 00:04:59.061 07:11:06 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:59.061 07:11:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:59.061 07:11:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:59.061 07:11:06 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:59.061 07:11:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.061 07:11:06 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:03.269 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:05:03.269 07:11:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:05:03.269 07:11:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:03.269 07:11:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:03.269 07:11:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:05:03.269 07:11:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:05:03.269 07:11:09 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:03.269 07:11:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:03.269 07:11:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:03.270 07:11:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.270 07:11:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.479 00:05:07.479 real 0m8.645s 00:05:07.479 user 0m2.969s 00:05:07.479 sys 0m4.935s 00:05:07.479 07:11:14 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.479 07:11:14 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:07.479 ************************************ 00:05:07.479 END TEST denied 00:05:07.479 ************************************ 00:05:07.479 07:11:14 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:07.479 07:11:14 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.479 07:11:14 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.479 07:11:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:07.741 ************************************ 00:05:07.741 START TEST allowed 00:05:07.741 ************************************ 00:05:07.741 07:11:14 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:05:07.741 07:11:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:05:07.741 07:11:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:07.741 07:11:14 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:05:07.741 07:11:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.741 07:11:14 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:13.105 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:13.105 07:11:20 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:13.105 07:11:20 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:13.105 07:11:20 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:13.105 07:11:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.105 07:11:20 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:16.411 00:05:16.411 real 0m8.774s 00:05:16.411 user 0m2.307s 00:05:16.411 sys 0m4.507s 00:05:16.411 07:11:23 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.411 07:11:23 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:16.411 ************************************ 00:05:16.411 END TEST allowed 00:05:16.411 ************************************ 00:05:16.411 00:05:16.411 real 0m25.021s 00:05:16.411 user 0m8.031s 00:05:16.411 sys 0m14.452s 00:05:16.411 07:11:23 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.411 07:11:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:16.411 ************************************ 00:05:16.411 END TEST acl 00:05:16.411 ************************************ 00:05:16.411 07:11:23 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:16.411 07:11:23 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.411 07:11:23 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.411 07:11:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:16.411 ************************************ 00:05:16.411 START TEST hugepages 00:05:16.411 ************************************ 00:05:16.411 07:11:23 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:16.674 * Looking for test storage... 00:05:16.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102572792 kB' 'MemAvailable: 106288860 kB' 'Buffers: 2704 kB' 'Cached: 14734776 kB' 'SwapCached: 0 kB' 'Active: 11580788 kB' 'Inactive: 3693560 kB' 'Active(anon): 11100988 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540112 kB' 'Mapped: 202324 kB' 'Shmem: 10564120 kB' 'KReclaimable: 583652 kB' 'Slab: 1465536 kB' 'SReclaimable: 583652 kB' 'SUnreclaim: 881884 kB' 'KernelStack: 27280 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460872 kB' 'Committed_AS: 12680172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235992 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.674 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:16.675 07:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 
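The scan above stops once the Hugepagesize line is reached: common.sh echoes 2048 and returns 0, hugepages.sh takes 2048 kB as the default hugepage size, enumerates both NUMA nodes, and clear_hp writes 0 into every per-node nr_hugepages file before CLEAR_HUGE is exported below. A minimal bash sketch of that flow, reconstructed from the trace alone (the helper name get_meminfo_value and the single glob over all nodes are simplifications of mine, not the actual setup/common.sh / setup/hugepages.sh code):

    # Return the value of one /proc/meminfo key the way the traced read loop does:
    # split each line on ': ', skip keys until the requested one matches.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    # Zero out any preallocated hugepages on every NUMA node (needs root),
    # mirroring the traced clear_hp loop over node0 and node1.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done

    get_meminfo_value Hugepagesize   # prints 2048 on this machine, per the trace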
00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:16.676 07:11:23 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:16.676 07:11:23 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.676 07:11:23 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.676 07:11:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:16.676 ************************************ 00:05:16.676 START TEST default_setup 00:05:16.676 ************************************ 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.676 07:11:23 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:19.987 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:80:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:05:19.987 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:19.987 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104776016 kB' 'MemAvailable: 108492084 kB' 'Buffers: 2704 kB' 'Cached: 14734896 kB' 'SwapCached: 0 kB' 'Active: 11597352 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117552 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556064 kB' 'Mapped: 202572 kB' 'Shmem: 10564240 kB' 'KReclaimable: 583652 kB' 'Slab: 1462280 kB' 'SReclaimable: 583652 kB' 'SUnreclaim: 878628 kB' 'KernelStack: 27376 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12697096 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 236104 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:19.987 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.988 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
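For reference, the meminfo snapshot printed above is consistent with what default_setup requested: get_test_nr_hugepages was called with size=2097152 (kB), which at the 2048 kB Hugepagesize detected earlier is 1024 pages, and the dump indeed reports HugePages_Total: 1024, HugePages_Free: 1024 and Hugetlb: 2097152 kB. The same arithmetic as a tiny bash check (variable names here are illustrative, not taken from the scripts):

    # 2 GiB worth of 2048 kB hugepages -> 1024 pages, matching HugePages_Total above.
    size_kb=2097152
    hugepagesize_kb=2048
    echo $(( size_kb / hugepagesize_kb ))   # 1024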
00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104775484 kB' 'MemAvailable: 108491552 kB' 'Buffers: 2704 kB' 'Cached: 14734900 kB' 'SwapCached: 0 kB' 'Active: 11597504 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117704 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556320 kB' 'Mapped: 202516 kB' 'Shmem: 10564244 kB' 'KReclaimable: 583652 kB' 'Slab: 1462152 kB' 'SReclaimable: 583652 kB' 'SUnreclaim: 878500 kB' 'KernelStack: 27392 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12695500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236024 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.989 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.990 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.256 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104773292 kB' 'MemAvailable: 108489360 kB' 'Buffers: 2704 kB' 'Cached: 14734916 kB' 'SwapCached: 0 kB' 'Active: 11597120 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117320 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556340 kB' 'Mapped: 202440 kB' 'Shmem: 10564260 kB' 'KReclaimable: 583652 kB' 'Slab: 1462156 kB' 'SReclaimable: 583652 kB' 'SUnreclaim: 878504 kB' 'KernelStack: 27456 kB' 'PageTables: 9444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12695520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236040 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.257 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.257 
[xtrace condensed: 00:05:20.257-258 07:11:27 — get_meminfo HugePages_Rsvd walks the snapshot above field by field (SwapCached through Unaccepted); none of these fields match HugePages_Rsvd, so each iteration takes the continue branch at setup/common.sh@32 and the loop reads the next field.]
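The /proc/meminfo snapshot echoed a few records back reports HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which is internally consistent: 1024 pages × 2048 kB = 2097152 kB. Purely as an illustration (not part of the SPDK scripts), the same fields can be pulled from a live system with a short awk pass:

  # Print the hugepage pool size and the memory it pins, straight from
  # /proc/meminfo; on the machine in this log it would report
  # hugepages=1024 size=2048kB pinned=2097152kB.
  awk -F': +' '
      /^HugePages_Total/ { total = $2 + 0 }
      /^Hugepagesize/    { size  = $2 + 0 }   # value is in kB
      END { printf "hugepages=%d size=%dkB pinned=%dkB\n", total, size, total * size }
  ' /proc/meminfo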
-- # read -r var val _ 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:20.259 nr_hugepages=1024 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.259 resv_hugepages=0 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.259 surplus_hugepages=0 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.259 anon_hugepages=0 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104774504 kB' 'MemAvailable: 108490572 kB' 'Buffers: 2704 kB' 'Cached: 14734940 kB' 'SwapCached: 0 kB' 'Active: 
11596808 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117008 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556060 kB' 'Mapped: 202440 kB' 'Shmem: 10564284 kB' 'KReclaimable: 583652 kB' 'Slab: 1462156 kB' 'SReclaimable: 583652 kB' 'SUnreclaim: 878504 kB' 'KernelStack: 27264 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12694296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.259 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.259 07:11:27 
[xtrace condensed: 00:05:20.259-260 07:11:27 — get_meminfo HugePages_Total scans the second snapshot field by field (Active through HardwareCorrupted); none match HugePages_Total, so the loop at setup/common.sh@31-32 keeps reading.]
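The trace compares a hugepage count read from meminfo against nr_hugepages + surp + resv (setup/hugepages.sh@107 above, and again at @110 below once HugePages_Total has been looked up). With the values echoed in this log (nr_hugepages=1024, surp=0, resv=0) the sum is 1024, matching the pool the kernel reports. A standalone sketch of that arithmetic, with hypothetical variable handling rather than SPDK's actual script:

  # Re-derive the accounting comparison from live values: the HugePages_Total
  # reported by the kernel should equal the requested pool plus surplus plus
  # reserved pages.
  nr_hugepages=1024   # requested pool size, as echoed by the test above
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

  if (( ${total:-0} == nr_hugepages + ${surp:-0} + ${resv:-0} )); then
      echo "hugepage accounting consistent: $total pages"
  else
      echo "unexpected hugepage count: total=$total, expected $((nr_hugepages + ${surp:-0} + ${resv:-0}))" >&2
  fi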
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.260 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.260 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.260 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.260 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.260 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.260 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.260 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.260 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:20.261 
07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.261 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57567048 kB' 'MemUsed: 8091960 kB' 'SwapCached: 0 kB' 'Active: 3156396 kB' 'Inactive: 235936 kB' 'Active(anon): 2916972 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3124196 kB' 'Mapped: 89172 kB' 'AnonPages: 271388 kB' 'Shmem: 2648836 kB' 'KernelStack: 15352 kB' 'PageTables: 5636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275564 kB' 'Slab: 787684 kB' 'SReclaimable: 275564 kB' 'SUnreclaim: 512120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:20.261 07:11:27 
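Just above, get_nodes enumerates /sys/devices/system/node/node[0-9]* (the log shows no_nodes=2), and get_meminfo is then re-run per node with mem_f switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo. A minimal sketch of that enumeration and per-node lookup (the array and variable names below are invented for the example):

  # Enumerate NUMA nodes the way the trace does, then read one field from a
  # node-local meminfo file instead of the system-wide /proc/meminfo.
  shopt -s nullglob
  declare -A node_meminfo
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}                 # "0", "1", ...
      node_meminfo[$node]=$node_dir/meminfo
  done
  echo "detected ${#node_meminfo[@]} NUMA node(s)"   # 2 on the logged machine

  # Per-node lookup, mirroring get_meminfo HugePages_Surp 0:
  if [[ -n ${node_meminfo[0]:-} ]]; then
      awk '/HugePages_Surp:/ {print "node0 HugePages_Surp:", $NF}' "${node_meminfo[0]}"
  fi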
[xtrace condensed: 00:05:20.261-262 07:11:27 — get_meminfo HugePages_Surp 0 walks node0's meminfo the same way (MemTotal through HugePages_Total so far); none of these fields match HugePages_Surp and the scan continues.]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:20.262 node0=1024 expecting 1024 00:05:20.262 07:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:20.263 00:05:20.263 real 0m3.488s 00:05:20.263 user 0m1.149s 00:05:20.263 sys 0m2.336s 00:05:20.263 07:11:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.263 07:11:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:20.263 ************************************ 00:05:20.263 END TEST default_setup 00:05:20.263 ************************************ 00:05:20.263 07:11:27 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:20.263 07:11:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.263 07:11:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.263 07:11:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:20.263 ************************************ 00:05:20.263 START TEST per_node_1G_alloc 00:05:20.263 ************************************ 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
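The default_setup case closes above with "node0=1024 expecting 1024", and per_node_1G_alloc opens by requesting 1048576 kB spread across nodes 0 and 1. A minimal sketch of the arithmetic the trace below performs (variable names here are illustrative, not the script's own; the 2048 kB Hugepagesize is taken from the meminfo dumps in this log):

    # Sketch only: the 1 GiB request divided by the 2048 kB default hugepage size,
    # then the same count assigned to every node listed for the test.
    size_kb=1048576                     # per_node_1G_alloc request, in kB
    hugepagesize_kb=2048                # Hugepagesize from the meminfo dumps
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # -> 512
    nodes_test=()
    for node in 0 1; do                 # node_ids=('0' '1') in the trace
        nodes_test[node]=$nr_hugepages
    done
    echo "nr_hugepages=$nr_hugepages per node: ${nodes_test[*]}"   # prints: 512 512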
00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.263 07:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:23.571 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:23.571 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:23.571 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104752500 kB' 'MemAvailable: 108468568 kB' 'Buffers: 2704 kB' 'Cached: 14735052 kB' 'SwapCached: 0 kB' 'Active: 11594992 kB' 'Inactive: 3693560 kB' 'Active(anon): 11115192 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553560 kB' 'Mapped: 201960 kB' 'Shmem: 10564396 kB' 'KReclaimable: 583652 kB' 'Slab: 1462092 kB' 'SReclaimable: 583652 kB' 'SUnreclaim: 878440 kB' 'KernelStack: 27120 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12684316 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235976 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 
101711872 kB' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
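The trace above (setup/common.sh@17 through @31) is get_meminfo scanning a meminfo dump key by key until it reaches the requested field; the same scan repeats below for HugePages_Surp and HugePages_Rsvd. A compact, hedged reconstruction of that pattern as a standalone function (it mirrors the xtrace, it is not the setup/common.sh source):

    #!/usr/bin/env bash
    # Hedged reconstruction of the get_meminfo pattern shown in the xtrace.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _ line
        # With a node argument, prefer the per-node meminfo file if it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it as the trace does.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        echo 0
    }
    get_meminfo HugePages_Total        # e.g. 1024, matching the dumps in this log
    get_meminfo HugePages_Free 0       # same field, read from node0's meminfo instead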
00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:23.833 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.100 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.100 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.100 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.100 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.100 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.100 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
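Earlier in this case (at 07:11:27) the trace exported NRHUGE=512 and HUGENODE=0,1 before invoking spdk/scripts/setup.sh. As a hedged illustration of the kind of per-node reservation that request maps onto, the standard kernel sysfs knobs look like this (generic kernel interface, not the SPDK script itself; it needs root):

    # Illustration only: per-node 2048 kB hugepage reservation via sysfs,
    # sized to match NRHUGE=512 HUGENODE=0,1 from the log.
    NRHUGE=${NRHUGE:-512}
    IFS=',' read -ra nodes <<< "${HUGENODE:-0,1}"
    for n in "${nodes[@]}"; do
        knob=/sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages
        echo "$NRHUGE" > "$knob"
        echo "node$n: $(cat "$knob") hugepages reserved"
    done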
00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.101 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104749504 kB' 'MemAvailable: 108465508 kB' 'Buffers: 2704 kB' 'Cached: 14735056 kB' 'SwapCached: 0 kB' 'Active: 11597780 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117980 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556864 kB' 'Mapped: 201880 kB' 'Shmem: 10564400 kB' 'KReclaimable: 583588 kB' 'Slab: 1461980 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878392 kB' 'KernelStack: 27104 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12687508 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:24.103 07:11:31 
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.103 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:24.104 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104745308 kB' 'MemAvailable: 108461312 kB' 'Buffers: 2704 kB' 'Cached: 14735072 kB' 'SwapCached: 0 kB' 'Active: 11599816 kB' 'Inactive: 3693560 kB' 'Active(anon): 11120016 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558904 kB' 'Mapped: 202236 kB' 'Shmem: 10564416 kB' 'KReclaimable: 583588 kB' 'Slab: 1461980 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878392 kB' 'KernelStack: 27088 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12689396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235948 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB'
00:05:24.105 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.105 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.105 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:24.105 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:24.105 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:24.105 nr_hugepages=1024 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:24.105 resv_hugepages=0
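The two lookups above (HugePages_Surp, then HugePages_Rsvd) are the get_meminfo helper from setup/common.sh. Judging only from the commands visible in this trace, it loads /proc/meminfo (or a per-node meminfo file when a node argument is given), strips any 'Node N ' prefix, and scans 'key: value' pairs until the requested field matches. A minimal sketch of that pattern follows; the function name and the process-substitution feed are assumptions, not the exact SPDK source.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the trace; details of the real
# setup/common.sh may differ.
shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N "

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo mem
    # Per-node counters live in sysfs and prefix every key with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Usage matching the calls in this log:
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
echo "resv_hugepages=$resv nr_hugepages_total=$total"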
07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:24.106 surplus_hugepages=0 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:24.106 anon_hugepages=0 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:24.106 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104745220 kB' 'MemAvailable: 108461224 kB' 'Buffers: 2704 kB' 'Cached: 14735092 kB' 'SwapCached: 0 kB' 'Active: 11594400 kB' 'Inactive: 3693560 kB' 'Active(anon): 11114600 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553500 kB' 'Mapped: 201376 kB' 'Shmem: 10564436 kB' 'KReclaimable: 583588 kB' 'Slab: 1461980 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878392 kB' 'KernelStack: 27120 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12685828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235912 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB'
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
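At this point the test has all the inputs for the bookkeeping: surp=0, resv=0, nr_hugepages=1024, a matching HugePages_Total of 1024, and two NUMA nodes with an expected even split of 512 pages each; the loop that follows re-reads the counters per node from /sys/devices/system/node/nodeN/meminfo. A compact sketch of that accounting, reusing get_meminfo_sketch from above; the nodes_expected name and the exact per-node formula are assumptions (the real hugepages.sh tracks this in nodes_sys and nodes_test and may differ in detail).

# Accounting sketched from the hugepages.sh steps in this trace.
nr_hugepages=1024
surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run

# The global pool must be fully accounted for.
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage total: $total"

# Two nodes found under /sys/devices/system/node, so expect an even 512/512
# split, then re-check the per-node counters from the sysfs meminfo files.
nodes_expected=(512 512)
for node in "${!nodes_expected[@]}"; do
    node_total=$(get_meminfo_sketch HugePages_Total "$node")
    node_surp=$(get_meminfo_sketch HugePages_Surp "$node")
    (( node_total == nodes_expected[node] + node_surp )) ||
        echo "node$node: ${node_total} hugepages, expected ${nodes_expected[node]}"
done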
00:05:24.107 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:24.108 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58598024 kB' 'MemUsed: 7060984 kB' 'SwapCached: 0 kB' 'Active: 3154048 kB' 'Inactive: 235936 kB' 'Active(anon): 2914624 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3124280 kB' 'Mapped: 88244 kB' 'AnonPages: 268864 kB' 'Shmem: 2648920 kB' 'KernelStack: 15288 kB' 'PageTables: 5316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275564 kB' 'Slab: 787644 kB' 'SReclaimable: 275564 kB' 'SUnreclaim: 512080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.109 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46147588 kB' 'MemUsed: 14532248 kB' 'SwapCached: 0 kB' 'Active: 8440576 kB' 'Inactive: 3457624 kB' 'Active(anon): 8200200 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11613560 kB' 'Mapped: 113124 kB' 'AnonPages: 284836 kB' 'Shmem: 7915560 kB' 'KernelStack: 11864 kB' 'PageTables: 3288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 308024 kB' 'Slab: 674336 kB' 'SReclaimable: 308024 kB' 'SUnreclaim: 366312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
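For reference, the field-by-field walk being traced here is setup/common.sh's get_meminfo picking a single key out of a per-node meminfo file. The following is a minimal standalone sketch of the same idea; the helper name get_node_meminfo is illustrative only (it is not part of the SPDK scripts), and it assumes the /sys/devices/system/node/node<N>/meminfo layout shown in the snapshot above.

# Illustrative sketch: select one field (e.g. HugePages_Surp) from a NUMA
# node's meminfo, falling back to the system-wide /proc/meminfo.
get_node_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <N> "; strip that prefix,
    # then print the value of the first line whose key matches.
    awk -v key="$get" '{ sub(/^Node [0-9]+ /, "") }
                       $1 == key ":" { print $2; exit }' "$mem_f"
}

get_node_meminfo HugePages_Surp 1   # prints 0 against the node1 snapshot above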
[xtrace condensed: the same get_meminfo scan repeats over the node1 snapshot above, skipping MemTotal through FilePmdMapped with continue while looking for HugePages_Surp]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.110 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.110 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.110 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.110 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.110 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.110 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.110 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:24.111 node0=512 expecting 512 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:24.111 node1=512 expecting 512 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:24.111 00:05:24.111 real 0m3.852s 00:05:24.111 user 0m1.506s 00:05:24.111 sys 0m2.409s 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.111 07:11:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:24.111 ************************************ 00:05:24.111 END TEST per_node_1G_alloc 00:05:24.111 ************************************ 00:05:24.111 07:11:31 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:24.111 07:11:31 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.111 07:11:31 
setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.111 07:11:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:24.372 ************************************ 00:05:24.372 START TEST even_2G_alloc 00:05:24.372 ************************************ 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.372 07:11:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:27.675 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:27.675 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:27.675 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:05:27.675 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:27.675 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:27.675 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:27.675 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:27.675 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:27.675 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:27.675 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:27.676 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:27.676 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:27.676 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:27.676 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:27.676 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:27.676 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:27.676 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104746272 kB' 'MemAvailable: 108462276 kB' 'Buffers: 2704 kB' 'Cached: 14735252 kB' 'SwapCached: 0 kB' 'Active: 11596004 kB' 'Inactive: 3693560 kB' 'Active(anon): 11116204 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554900 kB' 'Mapped: 201524 kB' 'Shmem: 10564596 kB' 'KReclaimable: 583588 kB' 'Slab: 1461544 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 877956 kB' 'KernelStack: 27200 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12684556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235928 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.942 07:11:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue
[xtrace condensed: the AnonHugePages lookup skips Inactive through WritebackTmp in /proc/meminfo with continue]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.943 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
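The step that just completed (anon=0) is the verification helper deciding how many anonymous huge pages to discount: transparent hugepages are reported as "always [madvise] never" on this machine, i.e. not globally disabled, so AnonHugePages is read from /proc/meminfo and happens to be 0 kB. A simplified restatement of that check follows; it is a sketch of the traced logic, not the actual hugepages.sh code.

# Simplified sketch of the anon-hugepage probe traced above.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    # THP is not globally disabled, so anonymous huge pages may be in use.
    anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
else
    anon=0
fi
echo "AnonHugePages to discount: ${anon:-0} kB"   # 0 kB in the run above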
00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104746552 kB' 'MemAvailable: 108462556 kB' 'Buffers: 2704 kB' 'Cached: 14735256 kB' 'SwapCached: 0 kB' 'Active: 11595532 kB' 'Inactive: 3693560 kB' 'Active(anon): 11115732 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554412 kB' 'Mapped: 201408 kB' 'Shmem: 10564600 kB' 'KReclaimable: 583588 kB' 'Slab: 1461568 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 877980 kB' 'KernelStack: 27152 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12684576 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace condensed: the system-wide HugePages_Surp lookup continues in the same way, skipping MemAvailable and the remaining /proc/meminfo keys with continue]
-- setup/common.sh@32 -- # continue 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.944 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.945 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104746368 kB' 'MemAvailable: 108462372 kB' 'Buffers: 2704 kB' 'Cached: 14735272 kB' 'SwapCached: 0 kB' 'Active: 11595552 kB' 'Inactive: 3693560 kB' 'Active(anon): 11115752 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554416 kB' 'Mapped: 201408 kB' 'Shmem: 10564616 kB' 'KReclaimable: 583588 kB' 'Slab: 1461568 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 877980 kB' 'KernelStack: 27152 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12684596 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 
07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.946 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:27.947 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:27.948 nr_hugepages=1024 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:27.948 resv_hugepages=0 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:27.948 surplus_hugepages=0 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:27.948 anon_hugepages=0 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
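At this point in the trace, even_2G_alloc has resolved surp=0 (hugepages.sh@99) and resv=0 (hugepages.sh@100), echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and run the consistency checks at hugepages.sh@107 and @109 before starting a third get_meminfo call for HugePages_Total. A minimal standalone sketch of that consistency check, assuming the counters are read straight from /proc/meminfo rather than through the test's own get_meminfo helper in setup/common.sh, might look like:

    #!/usr/bin/env bash
    # Illustrative only -- simplified stand-in for setup/common.sh:get_meminfo;
    # it ignores per-node meminfo files and just scans /proc/meminfo.
    hp_counter() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        echo 0
    }

    requested=1024                              # even_2G_alloc target in this run
    nr_hugepages=$(hp_counter HugePages_Total)  # 1024 in the meminfo dump above
    surp=$(hp_counter HugePages_Surp)           # 0
    resv=$(hp_counter HugePages_Rsvd)           # 0

    # Same assertions as hugepages.sh@107 / @109 in the trace:
    (( requested == nr_hugepages + surp + resv )) || echo "surplus/reserved mismatch"
    (( requested == nr_hugepages ))               || echo "nr_hugepages mismatch"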
00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104746872 kB' 'MemAvailable: 108462876 kB' 'Buffers: 2704 kB' 'Cached: 14735272 kB' 'SwapCached: 0 kB' 'Active: 11595552 kB' 'Inactive: 3693560 kB' 'Active(anon): 11115752 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554440 kB' 'Mapped: 201408 kB' 'Shmem: 10564616 kB' 'KReclaimable: 583588 kB' 'Slab: 1461568 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 877980 kB' 'KernelStack: 27152 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12684620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 
07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.948 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.949 
07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:27.949 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58603300 kB' 'MemUsed: 7055708 kB' 'SwapCached: 0 kB' 'Active: 3154040 kB' 'Inactive: 235936 kB' 'Active(anon): 2914616 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3124432 kB' 'Mapped: 88244 kB' 'AnonPages: 268656 kB' 'Shmem: 2649072 kB' 'KernelStack: 15272 kB' 'PageTables: 
5320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275564 kB' 'Slab: 787460 kB' 'SReclaimable: 275564 kB' 'SUnreclaim: 511896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.950 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46143640 kB' 'MemUsed: 14536196 kB' 'SwapCached: 0 kB' 'Active: 8441212 kB' 'Inactive: 3457624 kB' 'Active(anon): 8200836 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11613608 kB' 'Mapped: 113164 kB' 'AnonPages: 285348 kB' 'Shmem: 7915608 kB' 'KernelStack: 11864 kB' 
'PageTables: 3236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 308024 kB' 'Slab: 674108 kB' 'SReclaimable: 308024 kB' 'SUnreclaim: 366084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:27.951 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.213 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:28.214 node0=512 expecting 512 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:28.214 node1=512 expecting 512 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:28.214 00:05:28.214 real 0m3.861s 00:05:28.214 user 0m1.575s 00:05:28.214 sys 0m2.345s 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.214 07:11:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:28.214 ************************************ 00:05:28.214 END TEST even_2G_alloc 00:05:28.214 ************************************ 00:05:28.214 07:11:35 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:28.214 07:11:35 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.214 07:11:35 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.214 07:11:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:28.214 ************************************ 00:05:28.214 START TEST odd_alloc 00:05:28.214 
************************************ 00:05:28.214 07:11:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:28.214 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:28.214 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:28.214 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.215 07:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:31.518 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:31.518 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:31.518 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104762304 kB' 'MemAvailable: 108478308 kB' 'Buffers: 2704 kB' 'Cached: 14735428 kB' 'SwapCached: 0 kB' 'Active: 11596964 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117164 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555624 kB' 'Mapped: 201456 kB' 'Shmem: 10564772 kB' 'KReclaimable: 583588 kB' 'Slab: 1461628 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878040 kB' 'KernelStack: 27248 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12688528 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236168 kB' 
'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.823 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 
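The trace above is setup/common.sh scanning a /proc/meminfo snapshot key by key: every non-matching field hits the continue branch until AnonHugePages matches, its value (0) is echoed, and the caller records anon=0. A minimal sketch of that lookup, assuming a hypothetical get_meminfo_sketch helper rather than the project's own get_meminfo, and reading /proc/meminfo directly instead of a captured array:

#!/usr/bin/env bash
# Minimal sketch (hypothetical helper, not setup/common.sh itself) of the
# lookup traced above: split each /proc/meminfo line on ': ' into key and
# value, skip keys that do not match, and echo the value of the one requested.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB on the node traced here
echo "anon=$anon"

The same scan is repeated below for HugePages_Surp and HugePages_Rsvd, each of which also resolves to 0 in this run.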
07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104763296 kB' 'MemAvailable: 108479300 kB' 'Buffers: 2704 kB' 'Cached: 14735432 kB' 'SwapCached: 0 kB' 'Active: 11597420 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117620 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556032 kB' 'Mapped: 201400 kB' 'Shmem: 10564776 kB' 'KReclaimable: 583588 kB' 'Slab: 1461564 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 877976 kB' 'KernelStack: 27296 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12688548 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236136 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:31.824 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.825 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104763284 kB' 'MemAvailable: 108479288 kB' 'Buffers: 2704 kB' 'Cached: 14735448 kB' 'SwapCached: 0 kB' 'Active: 11596664 kB' 'Inactive: 3693560 kB' 'Active(anon): 11116864 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555348 kB' 'Mapped: 201400 kB' 'Shmem: 10564792 kB' 'KReclaimable: 583588 kB' 'Slab: 1461580 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 877992 kB' 'KernelStack: 27184 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12688568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236120 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- 
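Before each scan, the helper decides where to read from: with an empty node argument the test for /sys/devices/system/node/node/meminfo fails, so the snapshot above comes from /proc/meminfo, captured with mapfile, and the extglob substitution that would strip a leading "Node <N> " prefix from a per-node file is a no-op. A sketch of that source-selection step, with the node handling simplified for illustration rather than copied from setup/common.sh:

#!/usr/bin/env bash
# Sketch of the meminfo source selection visible in the trace. Per-node files
# under /sys/devices/system/node prefix every line with "Node <N> ", which the
# extglob pattern removes so the same key/value parsing works for both sources.
shopt -s extglob

node=${1:-}                          # empty selects the system-wide snapshot
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi

mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")     # no-op when reading /proc/meminfo
printf '%s\n' "${mem[@]}" | grep -E '^HugePages_'

With the array captured, printf re-emits it one line at a time and the same IFS=': ' read loop runs again, this time looking for HugePages_Rsvd.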
setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.826 07:11:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.826 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 
07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.827 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:31.828 nr_hugepages=1025 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:31.828 resv_hugepages=0 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:31.828 surplus_hugepages=0 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:31.828 anon_hugepages=0 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- 
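At this point the odd_alloc check has all three counters it needs: anonymous, surplus and reserved hugepages are each 0 and the expected odd count is 1025, so the two arithmetic tests in the trace both reduce to nr_hugepages == 1025 before the HugePages_Total lookup that follows. A standalone sketch of that bookkeeping, using awk lookups and an expected-count parameter as stand-ins for the traced get_meminfo helper and nr_hugepages variable:

#!/usr/bin/env bash
# Standalone sketch of the accounting step above; the helper and the expected
# count are illustrative stand-ins, not the setup/hugepages.sh implementation.
expected=${1:-1025}

meminfo_val() { awk -v key="$1:" '$1 == key {print $2; exit}' /proc/meminfo; }

total=$(meminfo_val HugePages_Total)
surp=$(meminfo_val HugePages_Surp)
resv=$(meminfo_val HugePages_Rsvd)
anon=$(meminfo_val AnonHugePages)

echo "HugePages_Total=$total HugePages_Surp=$surp HugePages_Rsvd=$resv AnonHugePages=$anon"

# With no surplus or reserved pages, the odd allocation holds only if the
# kernel reports exactly the expected total.
if (( surp == 0 && resv == 0 && total == expected )); then
    echo "odd hugepage allocation of $expected pages verified"
else
    echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
    exit 1
fi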
setup/common.sh@20 -- # local mem_f mem 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104762120 kB' 'MemAvailable: 108478124 kB' 'Buffers: 2704 kB' 'Cached: 14735468 kB' 'SwapCached: 0 kB' 'Active: 11597004 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117204 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555660 kB' 'Mapped: 201400 kB' 'Shmem: 10564812 kB' 'KReclaimable: 583588 kB' 'Slab: 1461548 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 877960 kB' 'KernelStack: 27248 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12686972 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236088 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.828 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:31.829 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:32.103 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58598912 kB' 'MemUsed: 7060096 kB' 'SwapCached: 0 kB' 'Active: 3154856 kB' 'Inactive: 235936 kB' 'Active(anon): 2915432 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3124532 kB' 'Mapped: 88244 kB' 'AnonPages: 269448 kB' 'Shmem: 2649172 kB' 'KernelStack: 15464 kB' 'PageTables: 5656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275564 kB' 'Slab: 787452 kB' 'SReclaimable: 275564 kB' 'SUnreclaim: 511888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.104 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
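The long key-by-key walk above is setup/common.sh's get_meminfo scanning node0's meminfo one 'key: value' pair at a time until it reaches the field it was asked for. As a rough self-contained sketch of the same idea, assuming single-digit NUMA node numbers and not the SPDK helper itself:

    #!/usr/bin/env bash
    # meminfo_get KEY [NODE] -> print the value for KEY from /proc/meminfo or,
    # if NODE is given and exists, from /sys/devices/system/node/nodeNODE/meminfo.
    meminfo_get() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node [0-9] }                 # per-node files prefix every key with "Node N "
            IFS=': ' read -r var val _ <<< "$line"   # split "HugePages_Surp: 0" into key and value
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < "$file"
        return 1                                     # key not found
    }

Called as 'meminfo_get HugePages_Surp 0' on the box above, this would print 0, which is the value the trace echoes a few lines further down.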
00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46164992 kB' 'MemUsed: 14514844 kB' 'SwapCached: 0 kB' 'Active: 8442316 kB' 'Inactive: 3457624 kB' 'Active(anon): 8201940 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11613664 kB' 'Mapped: 113164 kB' 'AnonPages: 286284 kB' 'Shmem: 7915664 kB' 'KernelStack: 11848 kB' 'PageTables: 3212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 308024 kB' 'Slab: 674080 kB' 'SReclaimable: 308024 kB' 'SUnreclaim: 366056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
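The bookkeeping being verified here is plain arithmetic: the odd request of 1025 hugepages was split across the two NUMA nodes as 512 and 513, and with no surplus or reserved pages the per-node totals read back from node0 and node1 have to add up to the system-wide HugePages_Total of 1025. A minimal restatement of that check, with values taken from the trace rather than re-read from sysfs:

    # Illustrative recheck of the odd_alloc split; not the hugepages.sh code itself.
    nr_hugepages=1025 surp=0 resv=0
    nodes_test=(512 513)            # HugePages_Total read back for node0 and node1
    (( nodes_test[0] + nodes_test[1] + surp + resv == nr_hugepages )) \
        && echo "odd total of $nr_hugepages splits as ${nodes_test[0]} + ${nodes_test[1]}"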
00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.105 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:32.106 node0=512 expecting 513 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:32.106 node1=513 expecting 512 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:32.106 00:05:32.106 real 0m3.823s 00:05:32.106 user 0m1.540s 00:05:32.106 sys 0m2.343s 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.106 07:11:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:32.106 ************************************ 00:05:32.106 END TEST odd_alloc 00:05:32.106 ************************************ 00:05:32.106 07:11:39 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:32.106 07:11:39 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.106 07:11:39 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.106 07:11:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:32.106 ************************************ 00:05:32.106 START TEST custom_alloc 00:05:32.106 ************************************ 00:05:32.106 07:11:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:32.106 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:32.106 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:32.106 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:32.106 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:32.106 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:32.106 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:32.106 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:32.106 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:32.107 07:11:39 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.107 07:11:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:35.414 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:35.414 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
00:05:35.414 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.414 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103740308 kB' 'MemAvailable: 107456312 kB' 'Buffers: 2704 kB' 'Cached: 14735604 kB' 'SwapCached: 0 kB' 'Active: 11598012 kB' 'Inactive: 3693560 kB' 'Active(anon): 11118212 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556004 kB' 'Mapped: 201568 kB' 'Shmem: 10564948 kB' 'KReclaimable: 583588 kB' 'Slab: 1462044 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878456 kB' 'KernelStack: 26976 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12689692 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235912 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
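The trace above (hugepages.sh@175 through @188) is where the test builds its per-NUMA-node hugepage request: 512 pages on node 0, 1024 on node 1, 1536 in total. A minimal, self-contained sketch of that pattern, reusing the statements that appear verbatim at @182/@183; the initialisation to 0 and the comma-joining step are simplifications assumed here, not the literal hugepages.sh code:

    # Sketch only: per-node hugepage counts as reported in the trace.
    nodes_hp[0]=512      # hugepages requested on NUMA node 0
    nodes_hp[1]=1024     # hugepages requested on NUMA node 1

    _nr_hugepages=0      # initialised to 0 here for a self-contained example
    HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # same statement as @182 above
        (( _nr_hugepages += nodes_hp[node] ))             # same statement as @183 above
    done

    # Joining with commas (assumed) reproduces the values the trace reports:
    #   HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'  and  nr_hugepages=1536
    HUGENODE=$(IFS=,; printf '%s' "${HUGENODE[*]}")

Passing HUGENODE in this form to scripts/setup.sh is what lets this test size the pool differently on each node (512 + 1024) instead of splitting one global count evenly.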
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.415 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
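Every get_meminfo call traced in this section follows the same setup/common.sh pattern: pick /proc/meminfo (or the per-node meminfo file), strip any "Node N " prefix, then scan key/value pairs until the requested field matches, echoing its value. An approximate, self-contained re-creation of that pattern, assuming the signature seen in the trace (get_meminfo <field> [node]); the real helper's details may differ:

    # Approximate re-creation of the lookup pattern visible in the trace; not the literal common.sh code.
    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookups read the node-specific meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix carried by per-node lines

        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1                           # field not present (fallback behaviour assumed)
    }

Used the way the trace does, anon=$(get_meminfo AnonHugePages) and surp=$(get_meminfo HugePages_Surp) both come back 0 on this run, while HugePages_Total reads back the expected 1536.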
00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103742832 kB' 'MemAvailable: 107458836 kB' 'Buffers: 2704 kB' 'Cached: 14735604 kB' 'SwapCached: 0 kB' 'Active: 11598364 kB' 'Inactive: 3693560 kB' 'Active(anon): 11118564 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556864 kB' 'Mapped: 201504 kB' 'Shmem: 10564948 kB' 'KReclaimable: 583588 kB' 'Slab: 1462140 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878552 kB' 'KernelStack: 27152 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12687992 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235944 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.416 07:11:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.416 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 
07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.417 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103742812 kB' 'MemAvailable: 107458816 kB' 'Buffers: 2704 kB' 'Cached: 14735620 kB' 'SwapCached: 0 kB' 'Active: 11597288 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117488 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555748 kB' 'Mapped: 201496 kB' 'Shmem: 
10564964 kB' 'KReclaimable: 583588 kB' 'Slab: 1462140 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878552 kB' 'KernelStack: 27120 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12688012 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 
07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.418 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.685 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
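The xtrace above is setup/common.sh's get_meminfo() walking every field of /proc/meminfo and skipping it ("continue") until the requested key, HugePages_Rsvd, comes up. A minimal sketch of that lookup pattern, assuming a direct read of the file is equivalent to the script's cached mem array (the _sketch name is illustrative, not the SPDK helper itself):

    get_meminfo_sketch() {
        local get=$1                 # field to report, e.g. HugePages_Rsvd
        local var val _
        while IFS=': ' read -r var val _; do
            # skip every line until the requested field, then print its value
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Rsvd    # -> 0 in this run, hence resv=0 a few entries below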
00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:35.686 nr_hugepages=1536 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:35.686 resv_hugepages=0 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:35.686 surplus_hugepages=0 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:35.686 anon_hugepages=0 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103742016 kB' 'MemAvailable: 107458020 kB' 'Buffers: 2704 kB' 'Cached: 14735648 kB' 'SwapCached: 0 kB' 'Active: 11597204 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117404 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555596 kB' 'Mapped: 201436 kB' 'Shmem: 10564992 kB' 'KReclaimable: 583588 kB' 'Slab: 1461692 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878104 kB' 'KernelStack: 27200 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12689752 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235992 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.686 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.687 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58606416 kB' 'MemUsed: 7052592 kB' 'SwapCached: 0 kB' 'Active: 3155176 kB' 'Inactive: 235936 kB' 'Active(anon): 2915752 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3124660 kB' 'Mapped: 88244 kB' 'AnonPages: 269580 kB' 'Shmem: 2649300 kB' 'KernelStack: 15304 kB' 'PageTables: 5420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275564 kB' 'Slab: 787424 kB' 'SReclaimable: 275564 kB' 'SUnreclaim: 511860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.688 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
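For the per-node pass traced here, common.sh switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix, and strips that prefix with the extglob expansion mem=("${mem[@]#Node +([0-9]) }") before running the same field-matching loop. A hedged, self-contained sketch of that variant (the function name and the re-read through process substitution are illustrative simplifications):

    shopt -s extglob
    node_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"            # cache the file, one line per element
        mem=("${mem[@]#Node +([0-9]) }")     # drop the leading "Node N " prefix
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    node_meminfo_sketch HugePages_Surp 0     # -> 0 for node 0 in this run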
00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.689 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
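The hugepages.sh@115-@117 entries above fold the reserved and surplus counts into each node's expected page total: for every node the loop adds resv (0 here) and then that node's HugePages_Surp (also 0). A hedged sketch of that accounting, assuming nodes_test starts out holding the per-node counts this custom_alloc run requested (512 on node 0, 1024 on node 1, matching the nodes_sys assignments earlier in the trace) and using awk as a stand-in for the script's get_meminfo:

    nodes_test=([0]=512 [1]=1024)   # assumed per-node request for this run
    resv=0                          # HugePages_Rsvd reported above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "/sys/devices/system/node/node$node/meminfo")
        (( nodes_test[node] += surp ))   # surp is 0 for both nodes in this run
    done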
00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45137696 kB' 'MemUsed: 15542140 kB' 'SwapCached: 0 kB' 'Active: 8441848 kB' 'Inactive: 3457624 kB' 'Active(anon): 8201472 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11613712 kB' 'Mapped: 113192 kB' 'AnonPages: 285836 kB' 'Shmem: 7915712 kB' 'KernelStack: 11992 kB' 'PageTables: 3248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 308024 kB' 'Slab: 674268 kB' 'SReclaimable: 308024 kB' 'SUnreclaim: 366244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.690 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
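Once the node 1 lookup finishes, what remains is arithmetic: the 512 pages expected on node 0 plus the 1024 on node 1 must equal the 1536 global HugePages_Total echoed earlier, with no surplus or reserved pages left over, which is the (( 1536 == nr_hugepages + surp + resv )) test at hugepages.sh@107/@110. A one-line restatement with this run's numbers:

    nr_hugepages=1536 surp=0 resv=0
    (( 512 + 1024 == nr_hugepages + surp + resv )) && echo 'per-node split adds up'   # 1536 == 1536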
00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:35.691 node0=512 expecting 512 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:35.691 node1=1024 expecting 1024 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:35.691 00:05:35.691 real 0m3.591s 00:05:35.691 user 0m1.408s 00:05:35.691 sys 0m2.215s 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.691 07:11:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:35.691 ************************************ 00:05:35.691 END TEST custom_alloc 00:05:35.691 ************************************ 00:05:35.691 07:11:42 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:35.691 07:11:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.691 07:11:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.691 07:11:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:35.691 ************************************ 00:05:35.691 START TEST no_shrink_alloc 00:05:35.691 ************************************ 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:35.691 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:35.692 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:35.692 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:35.692 07:11:42 setup.sh.hugepages.no_shrink_alloc -- 
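custom_alloc closes here with both nodes matching their expectations ("node0=512 expecting 512", "node1=1024 expecting 1024", compared as the joined string 512,1024), and no_shrink_alloc immediately calls get_test_nr_hugepages 2097152 0. A hedged recap of the arithmetic that trace implies follows; the kB units are inferred from the "Hugepagesize: 2048 kB" entries in the meminfo dumps, and the loop body is paraphrased from the traced hugepages.sh lines rather than copied from the script.

    size=2097152                                  # first argument in the trace, taken as kB
    default_hugepages=2048                        # kB per huge page ("Hugepagesize: 2048 kB")
    nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024, the value @57 assigns
    user_nodes=(0)                                # second argument: pin the allocation to node 0
    nodes_test=()
    for _no_nodes in "${user_nodes[@]}"; do       # mirrors hugepages.sh@70-@71
        nodes_test[_no_nodes]=$nr_hugepages       # nodes_test[0]=1024
    done

This leaves nodes_test[0]=1024, which is the per-node expectation the verify pass below then checks against the HugePages counters read back through get_meminfo.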
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:35.692 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:35.692 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:35.692 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:35.692 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.692 07:11:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:38.996 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:38.996 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:38.996 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.263 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104796664 kB' 'MemAvailable: 108512668 kB' 'Buffers: 2704 kB' 'Cached: 14735780 kB' 'SwapCached: 0 kB' 'Active: 11597688 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117888 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556468 kB' 'Mapped: 201596 kB' 'Shmem: 10565124 kB' 'KReclaimable: 583588 kB' 'Slab: 1462192 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878604 kB' 'KernelStack: 27376 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12690536 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236216 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.264 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104796752 kB' 'MemAvailable: 108512756 kB' 'Buffers: 2704 kB' 'Cached: 14735780 kB' 'SwapCached: 0 kB' 'Active: 11598248 kB' 'Inactive: 3693560 kB' 'Active(anon): 11118448 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556504 kB' 'Mapped: 201540 kB' 'Shmem: 10565124 kB' 'KReclaimable: 583588 kB' 'Slab: 1462328 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878740 kB' 'KernelStack: 27312 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12690552 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236088 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.265 
07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.265 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.266 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104797444 kB' 'MemAvailable: 108513448 kB' 'Buffers: 2704 kB' 'Cached: 14735800 kB' 'SwapCached: 0 kB' 'Active: 11598056 kB' 'Inactive: 3693560 kB' 'Active(anon): 11118256 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556296 kB' 'Mapped: 201464 kB' 'Shmem: 10565144 kB' 'KReclaimable: 583588 kB' 'Slab: 1462324 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878736 kB' 'KernelStack: 27328 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12687480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236072 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 
kB' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.267 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.268 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 
07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:39.269 07:11:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:39.269 nr_hugepages=1024 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:39.269 resv_hugepages=0 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:39.269 surplus_hugepages=0 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:39.269 anon_hugepages=0 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104801052 kB' 'MemAvailable: 108517056 kB' 'Buffers: 2704 kB' 'Cached: 14735820 kB' 'SwapCached: 0 kB' 'Active: 11597564 kB' 'Inactive: 3693560 kB' 'Active(anon): 11117764 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555804 kB' 'Mapped: 201464 kB' 'Shmem: 10565164 kB' 'KReclaimable: 583588 kB' 'Slab: 1462292 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878704 kB' 'KernelStack: 27104 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12687500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.269 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 
07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.270 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57565420 kB' 'MemUsed: 8093588 kB' 'SwapCached: 0 kB' 'Active: 3157044 kB' 'Inactive: 235936 kB' 'Active(anon): 2917620 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3124804 kB' 'Mapped: 88244 kB' 'AnonPages: 271368 kB' 'Shmem: 2649444 kB' 'KernelStack: 15240 kB' 'PageTables: 5172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275564 kB' 'Slab: 787708 kB' 'SReclaimable: 275564 kB' 'SUnreclaim: 512144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- 
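The get_nodes step traced here walks /sys/devices/system/node/node+([0-9]) and records a hugepage count per NUMA node (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2). A minimal sketch of that enumeration follows, assuming the counts are read from the standard per-node nr_hugepages sysfs file for the 2048 kB page size reported in the meminfo dumps above; the trace only shows the resulting assignments, so that source path is an assumption, and nodes_sys simply reuses the array name from the trace.

shopt -s extglob                                   # the node+([0-9]) glob needs extglob
declare -a nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # assumed source of the per-node count; the trace only shows the results
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"                   # 2 on this machine, per the trace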
setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.271 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.272 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:39.273 node0=1024 expecting 1024 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.273 07:11:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:42.578 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:42.578 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:42.578 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:42.840 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- 
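The scripts/setup.sh rerun traced above is driven by CLEAR_HUGE=no and NRHUGE=512, yet it only reports "INFO: Requested 512 hugepages but 1024 already allocated on node0" and leaves the existing pool untouched: the no_shrink_alloc case never reduces a per-node allocation. A minimal sketch of such a grow-only guard, assuming it simply compares the requested count with the current sysfs value (hypothetical snippet, not the actual setup.sh code):

    # grow-only hugepage request: never shrink an existing per-node pool
    want=${NRHUGE:-512}
    node_sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    cur=$(cat "$node_sysfs/nr_hugepages")
    if (( cur >= want )); then
        echo "INFO: Requested $want hugepages but $cur already allocated on node0"
    else
        echo "$want" > "$node_sysfs/nr_hugepages"   # requires root
    fi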
setup/hugepages.sh@90 -- # local sorted_t 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104790488 kB' 'MemAvailable: 108506492 kB' 'Buffers: 2704 kB' 'Cached: 14735936 kB' 'SwapCached: 0 kB' 'Active: 11599792 kB' 'Inactive: 3693560 kB' 'Active(anon): 11119992 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558324 kB' 'Mapped: 201548 kB' 'Shmem: 10565280 kB' 'KReclaimable: 583588 kB' 'Slab: 1462144 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878556 kB' 'KernelStack: 27152 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12688724 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235992 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.840 07:11:50 
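The mapfile/expansion pair traced just above (setup/common.sh@28-29) slurps the whole meminfo file into an array and strips the "Node <n> " prefix from every element with one extglob expansion before the field-by-field scan begins. The same idiom in isolation, read from node0's meminfo purely as an example:

    shopt -s extglob                                    # required for the +([0-9]) pattern
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")                    # "Node 0 MemFree: ..." -> "MemFree: ..."
    printf '%s\n' "${mem[@]}" | head -n 3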
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.840 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:42.841 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.107 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
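The long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries above is ordinary bash xtrace of setup/common.sh:get_meminfo walking the captured snapshot one field at a time until the requested key matches, then echoing that field's value (0 for AnonHugePages here, hence anon=0). A condensed stand-alone sketch of the same lookup; get_meminfo_value is a hypothetical name, not the SPDK helper itself:

    get_meminfo_value() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            # per-node meminfo prefixes every row with "Node <n> "; drop it
            [[ $line =~ ^Node\ [0-9]+\ (.*) ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        echo 0    # key not present in this meminfo
    }
    # e.g. get_meminfo_value HugePages_Free      -> free hugepages system-wide
    #      get_meminfo_value HugePages_Free 0    -> free hugepages on node0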
mapfile -t mem 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104793776 kB' 'MemAvailable: 108509780 kB' 'Buffers: 2704 kB' 'Cached: 14735940 kB' 'SwapCached: 0 kB' 'Active: 11599720 kB' 'Inactive: 3693560 kB' 'Active(anon): 11119920 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558212 kB' 'Mapped: 201488 kB' 'Shmem: 10565284 kB' 'KReclaimable: 583588 kB' 'Slab: 1462128 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878540 kB' 'KernelStack: 27168 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12688740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 
07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.108 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.109 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc 
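With anon=0 and surp=0 established, the trace starts a third get_meminfo pass for HugePages_Rsvd and will then repeat the per-node accounting that produced the "node0=1024 expecting 1024" line at hugepages.sh@128 earlier. A simplified per-node version of that final comparison (the real bookkeeping, including the surplus/reserved corrections, lives in setup/hugepages.sh:verify_nr_hugepages):

    verify_node_hugepages() {
        # compare each node's allocated 2 MiB pages against one expected count
        local expected=$1 node_dir node nr rc=0
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            node=${node_dir##*node}
            nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
            echo "node$node=$nr expecting $expected"
            [[ $nr == "$expected" ]] || rc=1
        done
        return $rc
    }
    # in this log node0 holds 1024 pages and 1024 are expected, so the earlier
    # check at hugepages.sh@130 passed despite the NRHUGE=512 rerun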
-- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104793892 kB' 'MemAvailable: 108509896 kB' 'Buffers: 2704 kB' 'Cached: 14735960 kB' 'SwapCached: 0 kB' 'Active: 11599308 kB' 'Inactive: 3693560 kB' 'Active(anon): 11119508 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557748 kB' 'Mapped: 201488 kB' 'Shmem: 10565304 kB' 'KReclaimable: 583588 kB' 'Slab: 1462168 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878580 kB' 'KernelStack: 27120 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12688764 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.110 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.111 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 
07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:43.112 nr_hugepages=1024 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:43.112 resv_hugepages=0 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:43.112 surplus_hugepages=0 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:43.112 anon_hugepages=0 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104793532 kB' 'MemAvailable: 108509536 kB' 'Buffers: 2704 kB' 'Cached: 14736000 kB' 'SwapCached: 0 kB' 'Active: 11599012 kB' 'Inactive: 3693560 kB' 'Active(anon): 11119212 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557428 kB' 'Mapped: 201488 kB' 'Shmem: 10565344 kB' 'KReclaimable: 583588 kB' 'Slab: 1462168 kB' 'SReclaimable: 583588 kB' 'SUnreclaim: 878580 kB' 'KernelStack: 27136 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12688784 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235976 kB' 'VmallocChunk: 0 kB' 'Percpu: 154368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.112 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.113 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 57561688 kB' 'MemUsed: 8097320 kB' 'SwapCached: 0 kB' 'Active: 3157080 kB' 'Inactive: 235936 kB' 'Active(anon): 2917656 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3124948 kB' 'Mapped: 88244 kB' 'AnonPages: 271256 kB' 'Shmem: 2649588 kB' 'KernelStack: 15288 kB' 'PageTables: 5316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 275564 kB' 'Slab: 787532 kB' 'SReclaimable: 275564 kB' 'SUnreclaim: 511968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.114 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 
07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.115 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:43.116 node0=1024 expecting 1024 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:43.116 00:05:43.116 real 0m7.356s 00:05:43.116 user 0m2.807s 00:05:43.116 sys 0m4.646s 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.116 07:11:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:43.116 ************************************ 00:05:43.116 END TEST no_shrink_alloc 00:05:43.116 ************************************ 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:43.116 07:11:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:43.116 00:05:43.116 real 0m26.626s 00:05:43.116 user 0m10.256s 00:05:43.116 sys 0m16.718s 00:05:43.116 07:11:50 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.116 07:11:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:43.116 ************************************ 00:05:43.116 END TEST hugepages 00:05:43.116 ************************************ 00:05:43.116 07:11:50 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:43.116 07:11:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.116 07:11:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.116 07:11:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:43.116 ************************************ 00:05:43.116 START TEST driver 00:05:43.116 ************************************ 00:05:43.116 07:11:50 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:43.377 * Looking for test storage... 
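
The guess_driver test that follows settles on vfio-pci only when the kernel can actually back it: it counts the entries under /sys/kernel/iommu_groups, looks at the unsafe no-IOMMU knob, and confirms that "modprobe --show-depends vfio_pci" resolves to real .ko files. A rough stand-alone version of that decision is below; pick_driver and the uio_pci_generic fallback are assumptions of this sketch, while the real logic lives in test/setup/driver.sh.

  #!/usr/bin/env bash
  shopt -s nullglob

  pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
      unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    # vfio-pci needs either a populated IOMMU (groups exist) or the unsafe
    # no-IOMMU knob, plus a module graph that resolves to real .ko files.
    if { ((${#groups[@]} > 0)) || [[ $unsafe == [Yy] ]]; } \
        && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
      echo vfio-pci
    else
      echo uio_pci_generic   # assumed fallback for this sketch
    fi
  }

  pick_driver
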
00:05:43.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:43.377 07:11:50 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:43.377 07:11:50 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:43.377 07:11:50 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:48.667 07:11:55 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:48.667 07:11:55 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.667 07:11:55 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.667 07:11:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:48.667 ************************************ 00:05:48.667 START TEST guess_driver 00:05:48.667 ************************************ 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:48.667 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:48.667 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:48.667 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:48.667 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:48.667 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:48.667 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:48.667 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:48.667 07:11:55 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:48.667 Looking for driver=vfio-pci 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.667 07:11:55 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:51.972 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.972 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.972 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.972 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.972 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:51.973 07:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:51.973 07:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:51.973 07:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:51.973 07:11:59 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:51.973 07:11:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:51.973 07:11:59 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:57.319 00:05:57.319 real 0m8.834s 00:05:57.319 user 0m2.920s 00:05:57.319 sys 0m5.119s 00:05:57.319 07:12:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.319 07:12:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:57.319 ************************************ 00:05:57.319 END TEST guess_driver 00:05:57.319 ************************************ 00:05:57.319 00:05:57.319 real 0m13.928s 00:05:57.319 user 0m4.479s 00:05:57.319 sys 0m7.851s 00:05:57.319 07:12:04 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.319 
07:12:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:57.319 ************************************ 00:05:57.319 END TEST driver 00:05:57.319 ************************************ 00:05:57.319 07:12:04 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:57.319 07:12:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.319 07:12:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.319 07:12:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:57.319 ************************************ 00:05:57.319 START TEST devices 00:05:57.319 ************************************ 00:05:57.319 07:12:04 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:57.319 * Looking for test storage... 00:05:57.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:57.319 07:12:04 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:57.319 07:12:04 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:57.319 07:12:04 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:57.319 07:12:04 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:01.529 07:12:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:01.529 07:12:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:01.529 07:12:08 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:01.529 07:12:08 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:01.529 07:12:08 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:01.529 07:12:08 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:01.529 07:12:08 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:01.529 07:12:08 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:01.529 07:12:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:01.529 07:12:08 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:06:01.529 No valid GPT data, 
bailing 00:06:01.529 07:12:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:01.529 07:12:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:01.529 07:12:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:01.529 07:12:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:01.530 07:12:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:01.530 07:12:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:01.530 07:12:08 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:06:01.530 07:12:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:06:01.530 07:12:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:01.530 07:12:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:06:01.530 07:12:08 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:01.530 07:12:08 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:01.530 07:12:08 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:01.530 07:12:08 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.530 07:12:08 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.530 07:12:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:01.530 ************************************ 00:06:01.530 START TEST nvme_mount 00:06:01.530 ************************************ 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:01.530 07:12:08 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:01.530 07:12:08 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:02.474 Creating new GPT entries in memory. 00:06:02.474 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:02.474 other utilities. 00:06:02.474 07:12:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:02.474 07:12:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:02.474 07:12:09 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:02.474 07:12:09 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:02.474 07:12:09 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:03.860 Creating new GPT entries in memory. 00:06:03.860 The operation has completed successfully. 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 4058796 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:03.860 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:03.861 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:03.861 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:03.861 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:03.861 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
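
Condensed, the nvme_mount setup traced above is: zap the GPT, create a roughly 1 GiB first partition, put ext4 on it, mount it, and drop a dummy file for the verify pass that continues below. A destructive stand-alone sketch of that flow; the device path and mount point here are placeholders, not values to reuse.

  #!/usr/bin/env bash
  set -e
  disk=/dev/nvme0n1          # placeholder test disk: everything on it is destroyed
  mnt=/tmp/spdk_nvme_mount   # placeholder mount point

  sgdisk "$disk" --zap-all              # wipe any existing GPT/MBR, as in the trace
  sgdisk "$disk" --new=1:2048:2099199   # one ~1 GiB partition (same sector bounds)
  udevadm settle                        # wait for the partition node to show up

  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"
  mount "${disk}p1" "$mnt"
  touch "$mnt/test_nvme"                # dummy file the verify pass checks for
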
00:06:03.861 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:03.861 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:03.861 07:12:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:03.861 07:12:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:03.861 07:12:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:07.165 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:07.426 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:07.426 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:07.687 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:07.687 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:06:07.687 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:07.687 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:07.687 07:12:14 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:07.687 07:12:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.238 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:10.499 07:12:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:10.759 07:12:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
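
The PCI walk above and below is setup.sh config run with PCI_ALLOWED pinned to the test NVMe; the verify helper reads each status line and accepts the device being left alone only when the status names the expected active mount ("Active devices: ..., so not binding PCI dev"). A reduced sketch of that gating check follows; check_binding is a hypothetical name, and the example status line is only shaped after this log, with placeholder vendor/device IDs.

  #!/usr/bin/env bash
  # Walk "<pci> ... <status>" lines and confirm the target device is reported
  # as held by the expected active mount.
  check_binding() {
    local want_pci=$1 want_mount=$2 pci _ status found=0
    while read -r pci _ _ status; do
      [[ $pci == "$want_pci" ]] || continue
      [[ $status == *"Active devices: "*"$want_mount"* ]] && found=1
    done
    ((found == 1))
  }

  # One status line shaped like the log output (IDs are placeholders):
  printf '%s\n' '0000:65:00.0 (8086 0a54): Active devices: data@nvme0n1, so not binding PCI dev' |
    check_binding 0000:65:00.0 data@nvme0n1 && echo 'kept bound to its active mount'
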
00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.063 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.064 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.064 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.064 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.064 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:14.064 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.324 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:14.324 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:14.324 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:14.324 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:14.324 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:14.324 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:14.324 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:14.324 07:12:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:14.324 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:14.324 00:06:14.324 real 0m12.808s 00:06:14.324 user 0m3.641s 00:06:14.324 sys 0m6.971s 00:06:14.324 07:12:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.324 07:12:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:14.324 ************************************ 00:06:14.324 END TEST nvme_mount 00:06:14.324 ************************************ 00:06:14.324 
07:12:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:14.325 07:12:21 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.325 07:12:21 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.325 07:12:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:14.325 ************************************ 00:06:14.325 START TEST dm_mount 00:06:14.325 ************************************ 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:14.325 07:12:21 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:15.712 Creating new GPT entries in memory. 00:06:15.712 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:15.712 other utilities. 00:06:15.712 07:12:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:15.712 07:12:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:15.712 07:12:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:15.712 07:12:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:15.712 07:12:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:16.654 Creating new GPT entries in memory. 00:06:16.654 The operation has completed successfully. 
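
Once the second partition exists, the dm_mount test below builds a device-mapper node named nvme_dm_test over the two partitions, confirms it resolves to /dev/dm-0 and shows up as a holder of both partitions, then formats and mounts it. A minimal sketch using a linear concatenation; the exact table the test feeds dmsetup is not visible in this log, so the table below is an assumption, and the partition paths are placeholders.

  #!/usr/bin/env bash
  set -e
  p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2   # the two partitions created by the sgdisk calls above
  s1=$(blockdev --getsz "$p1")          # sizes in 512-byte sectors
  s2=$(blockdev --getsz "$p2")

  # Concatenate the partitions into one device-mapper node. Each table line is
  # "<logical start> <length> linear <backing device> <offset>".
  printf '0 %s linear %s 0\n%s %s linear %s 0\n' "$s1" "$p1" "$s1" "$s2" "$p2" |
    dmsetup create nvme_dm_test

  readlink -f /dev/mapper/nvme_dm_test       # resolves to /dev/dm-N, as in the trace
  ls "/sys/class/block/${p1##*/}/holders"    # the partition now lists dm-N as a holder
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
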
00:06:16.654 07:12:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:16.654 07:12:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:16.655 07:12:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:16.655 07:12:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:16.655 07:12:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:17.599 The operation has completed successfully. 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4064173 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:17.599 07:12:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:20.979 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:21.241 
07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:21.241 07:12:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:24.546 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.807 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:24.807 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:24.807 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:24.807 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:24.807 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:24.807 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:24.807 07:12:31 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:24.807 07:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:24.807 07:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:24.807 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:24.807 07:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:24.807 07:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:24.807 00:06:24.807 real 0m10.375s 00:06:24.807 user 0m2.629s 00:06:24.807 sys 0m4.674s 00:06:24.807 07:12:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.807 07:12:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:24.807 ************************************ 00:06:24.807 END TEST dm_mount 00:06:24.807 ************************************ 00:06:24.807 07:12:32 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:24.807 07:12:32 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:24.807 07:12:32 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:24.807 07:12:32 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
00:06:24.807 07:12:32 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:24.807 07:12:32 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:24.807 07:12:32 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:25.068 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:25.068 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:06:25.068 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:25.068 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:25.068 07:12:32 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:25.068 07:12:32 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:25.068 07:12:32 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:25.068 07:12:32 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:25.068 07:12:32 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:25.068 07:12:32 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:25.068 07:12:32 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:25.068 00:06:25.068 real 0m27.903s 00:06:25.068 user 0m7.997s 00:06:25.068 sys 0m14.491s 00:06:25.068 07:12:32 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.068 07:12:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:25.068 ************************************ 00:06:25.068 END TEST devices 00:06:25.068 ************************************ 00:06:25.068 00:06:25.068 real 1m33.900s 00:06:25.068 user 0m30.914s 00:06:25.068 sys 0m53.810s 00:06:25.068 07:12:32 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.068 07:12:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:25.068 ************************************ 00:06:25.068 END TEST setup.sh 00:06:25.068 ************************************ 00:06:25.329 07:12:32 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:28.630 Hugepages 00:06:28.630 node hugesize free / total 00:06:28.630 node0 1048576kB 0 / 0 00:06:28.630 node0 2048kB 2048 / 2048 00:06:28.630 node1 1048576kB 0 / 0 00:06:28.630 node1 2048kB 0 / 0 00:06:28.630 00:06:28.630 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:28.630 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:28.630 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:28.630 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:28.630 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:28.630 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:28.630 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:28.630 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:28.630 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:28.630 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:28.630 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:28.630 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:28.630 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:28.630 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:28.630 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:28.630 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:28.630 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:28.630 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:06:28.630 07:12:35 -- spdk/autotest.sh@130 -- # uname -s 
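The `setup.sh status` table above lists free/total hugepages per NUMA node (2048 kB pages allocated on node0 only here). A minimal sketch of reading the same counters directly from sysfs; the paths are standard Linux hugetlbfs sysfs entries, not taken from setup.sh itself:

  # print free/total hugepages per NUMA node, similar to the table above
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          size=${hp##*hugepages-}                 # e.g. 2048kB or 1048576kB
          echo "$(basename "$node") $size $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
      done
  done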
00:06:28.630 07:12:35 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:28.630 07:12:35 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:28.630 07:12:35 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:31.934 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:31.934 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:31.934 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:31.934 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:31.934 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:31.934 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:32.195 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:32.195 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:32.195 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:32.195 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:32.195 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:32.195 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:32.195 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:32.195 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:32.195 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:32.195 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:34.108 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:34.369 07:12:41 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:35.312 07:12:42 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:35.312 07:12:42 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:35.312 07:12:42 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:35.312 07:12:42 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:35.312 07:12:42 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:35.312 07:12:42 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:35.312 07:12:42 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:35.312 07:12:42 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:35.312 07:12:42 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:35.312 07:12:42 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:35.312 07:12:42 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:06:35.312 07:12:42 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:38.614 Waiting for block devices as requested 00:06:38.614 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:38.614 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:38.876 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:38.876 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:38.876 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:39.137 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:39.137 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:39.137 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:39.399 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:39.399 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:39.399 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:39.660 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:39.660 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:39.660 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:39.922 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:39.922 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:39.922 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:40.205 07:12:47 -- common/autotest_common.sh@1538 -- # 
for bdf in "${bdfs[@]}" 00:06:40.205 07:12:47 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:40.205 07:12:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:40.205 07:12:47 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:06:40.205 07:12:47 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:40.205 07:12:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:40.205 07:12:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:40.205 07:12:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:40.205 07:12:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:40.205 07:12:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:40.205 07:12:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:40.205 07:12:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:40.205 07:12:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:40.205 07:12:47 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:06:40.205 07:12:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:40.205 07:12:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:40.205 07:12:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:40.205 07:12:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:40.205 07:12:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:40.205 07:12:47 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:40.205 07:12:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:40.205 07:12:47 -- common/autotest_common.sh@1557 -- # continue 00:06:40.205 07:12:47 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:40.205 07:12:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.205 07:12:47 -- common/autotest_common.sh@10 -- # set +x 00:06:40.205 07:12:47 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:40.205 07:12:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.205 07:12:47 -- common/autotest_common.sh@10 -- # set +x 00:06:40.205 07:12:47 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:43.518 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:43.518 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:43.518 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:43.518 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:43.778 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:43.778 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:43.778 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:43.778 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:43.778 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:43.779 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:43.779 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:43.779 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:43.779 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:43.779 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:43.779 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:43.779 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:43.779 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:44.039 07:12:51 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:44.039 07:12:51 -- common/autotest_common.sh@730 -- # xtrace_disable 
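The pre_cleanup trace above resolves /dev/nvme0 from BDF 0000:65:00.0 through sysfs, then parses the oacs field of `nvme id-ctrl` (0x5f here) and keeps only bit 3 (mask 0x8), which gates namespace management before a revert is attempted. A rough standalone equivalent, assuming nvme-cli is installed; the BDF is just the example from this run:

  bdf=0000:65:00.0                                     # example BDF from this run
  sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme" | head -n1)
  ctrlr=/dev/$(basename "$sysfs")                      # resolves to /dev/nvme0 on this host
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
  if (( (oacs & 0x8) != 0 )); then                     # OACS bit 3: namespace management
      echo "$ctrlr supports namespace management"
  else
      echo "$ctrlr has no namespace management, skipping revert"
  fi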
00:06:44.039 07:12:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.300 07:12:51 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:44.300 07:12:51 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:44.300 07:12:51 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:44.300 07:12:51 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:44.300 07:12:51 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:44.300 07:12:51 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:44.300 07:12:51 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:44.300 07:12:51 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:44.300 07:12:51 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:44.300 07:12:51 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:44.300 07:12:51 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:44.300 07:12:51 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:44.300 07:12:51 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:06:44.300 07:12:51 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:44.300 07:12:51 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:44.300 07:12:51 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:06:44.300 07:12:51 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:44.300 07:12:51 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:44.300 07:12:51 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:44.300 07:12:51 -- common/autotest_common.sh@1593 -- # return 0 00:06:44.300 07:12:51 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:44.300 07:12:51 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:44.300 07:12:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:44.300 07:12:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:44.300 07:12:51 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:44.300 07:12:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:44.300 07:12:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.300 07:12:51 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:44.300 07:12:51 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:44.300 07:12:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.300 07:12:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.300 07:12:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.300 ************************************ 00:06:44.300 START TEST env 00:06:44.300 ************************************ 00:06:44.300 07:12:51 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:44.561 * Looking for test storage... 
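The `run_test env .../test/env/env.sh` call above is what emits the START TEST / END TEST banners and per-test timing seen throughout this log. A simplified stand-in with the same shape; the real wrapper in autotest_common.sh also manages xtrace and failure accounting:

  run_test() {
      # usage: run_test <name> <command> [args...]
      local name=$1; shift
      local start=$SECONDS
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name (exit $rc, $((SECONDS - start))s)"
      echo "************************************"
      return $rc
  }
  # run_test env ./test/env/env.sh       # hypothetical usage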
00:06:44.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:44.561 07:12:51 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:44.561 07:12:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.561 07:12:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.561 07:12:51 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.561 ************************************ 00:06:44.561 START TEST env_memory 00:06:44.561 ************************************ 00:06:44.561 07:12:51 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:44.561 00:06:44.561 00:06:44.561 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.561 http://cunit.sourceforge.net/ 00:06:44.561 00:06:44.561 00:06:44.561 Suite: memory 00:06:44.561 Test: alloc and free memory map ...[2024-07-25 07:12:51.783457] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:44.561 passed 00:06:44.561 Test: mem map translation ...[2024-07-25 07:12:51.809081] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:44.561 [2024-07-25 07:12:51.809111] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:44.561 [2024-07-25 07:12:51.809157] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:44.561 [2024-07-25 07:12:51.809165] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:44.561 passed 00:06:44.561 Test: mem map registration ...[2024-07-25 07:12:51.864525] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:44.561 [2024-07-25 07:12:51.864554] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:44.561 passed 00:06:44.823 Test: mem map adjacent registrations ...passed 00:06:44.823 00:06:44.823 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.823 suites 1 1 n/a 0 0 00:06:44.823 tests 4 4 4 0 0 00:06:44.823 asserts 152 152 152 0 n/a 00:06:44.823 00:06:44.823 Elapsed time = 0.193 seconds 00:06:44.823 00:06:44.823 real 0m0.209s 00:06:44.823 user 0m0.196s 00:06:44.823 sys 0m0.011s 00:06:44.823 07:12:51 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.823 07:12:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:44.823 ************************************ 00:06:44.823 END TEST env_memory 00:06:44.823 ************************************ 00:06:44.823 07:12:51 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:44.823 07:12:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.823 07:12:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 
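The env_memory suite above is a standalone CUnit binary (memory_ut) exercising the spdk_mem_map_alloc, set_translation and register error paths: 4 tests, 152 asserts, 0 failures. With a built SPDK tree available it can apparently be run by hand and its summary checked the same way; SPDK_DIR below is a placeholder for the checkout path, not something defined in this log:

  SPDK_DIR=${SPDK_DIR:-$HOME/spdk}            # assumed path to a built SPDK tree
  "$SPDK_DIR/test/env/memory/memory_ut" | tee /tmp/memory_ut.log
  grep -A4 'Run Summary' /tmp/memory_ut.log   # expect "tests 4 4 4 0 0" as above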
00:06:44.823 07:12:51 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.823 ************************************ 00:06:44.823 START TEST env_vtophys 00:06:44.823 ************************************ 00:06:44.823 07:12:52 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:44.823 EAL: lib.eal log level changed from notice to debug 00:06:44.823 EAL: Detected lcore 0 as core 0 on socket 0 00:06:44.823 EAL: Detected lcore 1 as core 1 on socket 0 00:06:44.823 EAL: Detected lcore 2 as core 2 on socket 0 00:06:44.823 EAL: Detected lcore 3 as core 3 on socket 0 00:06:44.823 EAL: Detected lcore 4 as core 4 on socket 0 00:06:44.823 EAL: Detected lcore 5 as core 5 on socket 0 00:06:44.823 EAL: Detected lcore 6 as core 6 on socket 0 00:06:44.823 EAL: Detected lcore 7 as core 7 on socket 0 00:06:44.823 EAL: Detected lcore 8 as core 8 on socket 0 00:06:44.823 EAL: Detected lcore 9 as core 9 on socket 0 00:06:44.823 EAL: Detected lcore 10 as core 10 on socket 0 00:06:44.823 EAL: Detected lcore 11 as core 11 on socket 0 00:06:44.823 EAL: Detected lcore 12 as core 12 on socket 0 00:06:44.823 EAL: Detected lcore 13 as core 13 on socket 0 00:06:44.823 EAL: Detected lcore 14 as core 14 on socket 0 00:06:44.823 EAL: Detected lcore 15 as core 15 on socket 0 00:06:44.823 EAL: Detected lcore 16 as core 16 on socket 0 00:06:44.823 EAL: Detected lcore 17 as core 17 on socket 0 00:06:44.823 EAL: Detected lcore 18 as core 18 on socket 0 00:06:44.823 EAL: Detected lcore 19 as core 19 on socket 0 00:06:44.823 EAL: Detected lcore 20 as core 20 on socket 0 00:06:44.823 EAL: Detected lcore 21 as core 21 on socket 0 00:06:44.823 EAL: Detected lcore 22 as core 22 on socket 0 00:06:44.823 EAL: Detected lcore 23 as core 23 on socket 0 00:06:44.823 EAL: Detected lcore 24 as core 24 on socket 0 00:06:44.823 EAL: Detected lcore 25 as core 25 on socket 0 00:06:44.823 EAL: Detected lcore 26 as core 26 on socket 0 00:06:44.823 EAL: Detected lcore 27 as core 27 on socket 0 00:06:44.823 EAL: Detected lcore 28 as core 28 on socket 0 00:06:44.823 EAL: Detected lcore 29 as core 29 on socket 0 00:06:44.823 EAL: Detected lcore 30 as core 30 on socket 0 00:06:44.823 EAL: Detected lcore 31 as core 31 on socket 0 00:06:44.823 EAL: Detected lcore 32 as core 32 on socket 0 00:06:44.823 EAL: Detected lcore 33 as core 33 on socket 0 00:06:44.823 EAL: Detected lcore 34 as core 34 on socket 0 00:06:44.823 EAL: Detected lcore 35 as core 35 on socket 0 00:06:44.823 EAL: Detected lcore 36 as core 0 on socket 1 00:06:44.823 EAL: Detected lcore 37 as core 1 on socket 1 00:06:44.823 EAL: Detected lcore 38 as core 2 on socket 1 00:06:44.823 EAL: Detected lcore 39 as core 3 on socket 1 00:06:44.823 EAL: Detected lcore 40 as core 4 on socket 1 00:06:44.823 EAL: Detected lcore 41 as core 5 on socket 1 00:06:44.823 EAL: Detected lcore 42 as core 6 on socket 1 00:06:44.823 EAL: Detected lcore 43 as core 7 on socket 1 00:06:44.823 EAL: Detected lcore 44 as core 8 on socket 1 00:06:44.823 EAL: Detected lcore 45 as core 9 on socket 1 00:06:44.823 EAL: Detected lcore 46 as core 10 on socket 1 00:06:44.823 EAL: Detected lcore 47 as core 11 on socket 1 00:06:44.823 EAL: Detected lcore 48 as core 12 on socket 1 00:06:44.823 EAL: Detected lcore 49 as core 13 on socket 1 00:06:44.823 EAL: Detected lcore 50 as core 14 on socket 1 00:06:44.823 EAL: Detected lcore 51 as core 15 on socket 1 00:06:44.823 EAL: Detected lcore 52 as core 16 on socket 1 00:06:44.823 EAL: Detected lcore 
53 as core 17 on socket 1 00:06:44.823 EAL: Detected lcore 54 as core 18 on socket 1 00:06:44.823 EAL: Detected lcore 55 as core 19 on socket 1 00:06:44.823 EAL: Detected lcore 56 as core 20 on socket 1 00:06:44.823 EAL: Detected lcore 57 as core 21 on socket 1 00:06:44.823 EAL: Detected lcore 58 as core 22 on socket 1 00:06:44.823 EAL: Detected lcore 59 as core 23 on socket 1 00:06:44.823 EAL: Detected lcore 60 as core 24 on socket 1 00:06:44.823 EAL: Detected lcore 61 as core 25 on socket 1 00:06:44.823 EAL: Detected lcore 62 as core 26 on socket 1 00:06:44.823 EAL: Detected lcore 63 as core 27 on socket 1 00:06:44.823 EAL: Detected lcore 64 as core 28 on socket 1 00:06:44.823 EAL: Detected lcore 65 as core 29 on socket 1 00:06:44.823 EAL: Detected lcore 66 as core 30 on socket 1 00:06:44.823 EAL: Detected lcore 67 as core 31 on socket 1 00:06:44.823 EAL: Detected lcore 68 as core 32 on socket 1 00:06:44.823 EAL: Detected lcore 69 as core 33 on socket 1 00:06:44.823 EAL: Detected lcore 70 as core 34 on socket 1 00:06:44.823 EAL: Detected lcore 71 as core 35 on socket 1 00:06:44.823 EAL: Detected lcore 72 as core 0 on socket 0 00:06:44.823 EAL: Detected lcore 73 as core 1 on socket 0 00:06:44.823 EAL: Detected lcore 74 as core 2 on socket 0 00:06:44.823 EAL: Detected lcore 75 as core 3 on socket 0 00:06:44.823 EAL: Detected lcore 76 as core 4 on socket 0 00:06:44.823 EAL: Detected lcore 77 as core 5 on socket 0 00:06:44.823 EAL: Detected lcore 78 as core 6 on socket 0 00:06:44.823 EAL: Detected lcore 79 as core 7 on socket 0 00:06:44.823 EAL: Detected lcore 80 as core 8 on socket 0 00:06:44.823 EAL: Detected lcore 81 as core 9 on socket 0 00:06:44.823 EAL: Detected lcore 82 as core 10 on socket 0 00:06:44.823 EAL: Detected lcore 83 as core 11 on socket 0 00:06:44.823 EAL: Detected lcore 84 as core 12 on socket 0 00:06:44.823 EAL: Detected lcore 85 as core 13 on socket 0 00:06:44.823 EAL: Detected lcore 86 as core 14 on socket 0 00:06:44.823 EAL: Detected lcore 87 as core 15 on socket 0 00:06:44.823 EAL: Detected lcore 88 as core 16 on socket 0 00:06:44.823 EAL: Detected lcore 89 as core 17 on socket 0 00:06:44.823 EAL: Detected lcore 90 as core 18 on socket 0 00:06:44.823 EAL: Detected lcore 91 as core 19 on socket 0 00:06:44.823 EAL: Detected lcore 92 as core 20 on socket 0 00:06:44.823 EAL: Detected lcore 93 as core 21 on socket 0 00:06:44.823 EAL: Detected lcore 94 as core 22 on socket 0 00:06:44.823 EAL: Detected lcore 95 as core 23 on socket 0 00:06:44.823 EAL: Detected lcore 96 as core 24 on socket 0 00:06:44.823 EAL: Detected lcore 97 as core 25 on socket 0 00:06:44.823 EAL: Detected lcore 98 as core 26 on socket 0 00:06:44.823 EAL: Detected lcore 99 as core 27 on socket 0 00:06:44.823 EAL: Detected lcore 100 as core 28 on socket 0 00:06:44.823 EAL: Detected lcore 101 as core 29 on socket 0 00:06:44.823 EAL: Detected lcore 102 as core 30 on socket 0 00:06:44.823 EAL: Detected lcore 103 as core 31 on socket 0 00:06:44.823 EAL: Detected lcore 104 as core 32 on socket 0 00:06:44.823 EAL: Detected lcore 105 as core 33 on socket 0 00:06:44.823 EAL: Detected lcore 106 as core 34 on socket 0 00:06:44.823 EAL: Detected lcore 107 as core 35 on socket 0 00:06:44.823 EAL: Detected lcore 108 as core 0 on socket 1 00:06:44.823 EAL: Detected lcore 109 as core 1 on socket 1 00:06:44.824 EAL: Detected lcore 110 as core 2 on socket 1 00:06:44.824 EAL: Detected lcore 111 as core 3 on socket 1 00:06:44.824 EAL: Detected lcore 112 as core 4 on socket 1 00:06:44.824 EAL: Detected lcore 113 as core 5 on 
socket 1 00:06:44.824 EAL: Detected lcore 114 as core 6 on socket 1 00:06:44.824 EAL: Detected lcore 115 as core 7 on socket 1 00:06:44.824 EAL: Detected lcore 116 as core 8 on socket 1 00:06:44.824 EAL: Detected lcore 117 as core 9 on socket 1 00:06:44.824 EAL: Detected lcore 118 as core 10 on socket 1 00:06:44.824 EAL: Detected lcore 119 as core 11 on socket 1 00:06:44.824 EAL: Detected lcore 120 as core 12 on socket 1 00:06:44.824 EAL: Detected lcore 121 as core 13 on socket 1 00:06:44.824 EAL: Detected lcore 122 as core 14 on socket 1 00:06:44.824 EAL: Detected lcore 123 as core 15 on socket 1 00:06:44.824 EAL: Detected lcore 124 as core 16 on socket 1 00:06:44.824 EAL: Detected lcore 125 as core 17 on socket 1 00:06:44.824 EAL: Detected lcore 126 as core 18 on socket 1 00:06:44.824 EAL: Detected lcore 127 as core 19 on socket 1 00:06:44.824 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:44.824 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:44.824 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:44.824 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:44.824 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:44.824 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:44.824 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:44.824 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:44.824 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:44.824 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:44.824 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:44.824 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:44.824 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:44.824 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:44.824 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:44.824 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:44.824 EAL: Maximum logical cores by configuration: 128 00:06:44.824 EAL: Detected CPU lcores: 128 00:06:44.824 EAL: Detected NUMA nodes: 2 00:06:44.824 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:44.824 EAL: Detected shared linkage of DPDK 00:06:44.824 EAL: No shared files mode enabled, IPC will be disabled 00:06:44.824 EAL: Bus pci wants IOVA as 'DC' 00:06:44.824 EAL: Buses did not request a specific IOVA mode. 00:06:44.824 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:44.824 EAL: Selected IOVA mode 'VA' 00:06:44.824 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.824 EAL: Probing VFIO support... 00:06:44.824 EAL: IOMMU type 1 (Type 1) is supported 00:06:44.824 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:44.824 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:44.824 EAL: VFIO support initialized 00:06:44.824 EAL: Ask a virtual area of 0x2e000 bytes 00:06:44.824 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:44.824 EAL: Setting up physically contiguous memory... 
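Before reserving memory, EAL above probes VFIO and settles on IOMMU type 1 with IOVA as VA. Two quick host-side checks for those preconditions, using standard kernel interfaces rather than anything SPDK-specific:

  # non-empty only when the kernel exposes IOMMU groups (IOMMU enabled)
  ls /sys/kernel/iommu_groups
  # the modules behind "IOMMU type 1 (Type 1) is supported" / "VFIO support initialized"
  lsmod | grep '^vfio'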
00:06:44.824 EAL: Setting maximum number of open files to 524288 00:06:44.824 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:44.824 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:44.824 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:44.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.824 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:44.824 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.824 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:44.824 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:44.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.824 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:44.824 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.824 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:44.824 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:44.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.824 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:44.824 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.824 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:44.824 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:44.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.824 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:44.824 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.824 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:44.824 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:44.824 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:44.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.824 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:44.824 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:44.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.824 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:44.824 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:44.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.824 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:44.824 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:44.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.824 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:44.824 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:44.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.824 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:44.824 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:44.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.824 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:44.824 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:44.824 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.824 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:44.824 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:44.824 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.824 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:44.824 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:44.824 EAL: Hugepages will be freed exactly as allocated. 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: TSC frequency is ~2400000 KHz 00:06:44.824 EAL: Main lcore 0 is ready (tid=7fa472af2a00;cpuset=[0]) 00:06:44.824 EAL: Trying to obtain current memory policy. 00:06:44.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.824 EAL: Restoring previous memory policy: 0 00:06:44.824 EAL: request: mp_malloc_sync 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: Heap on socket 0 was expanded by 2MB 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:44.824 EAL: Mem event callback 'spdk:(nil)' registered 00:06:44.824 00:06:44.824 00:06:44.824 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.824 http://cunit.sourceforge.net/ 00:06:44.824 00:06:44.824 00:06:44.824 Suite: components_suite 00:06:44.824 Test: vtophys_malloc_test ...passed 00:06:44.824 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:44.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.824 EAL: Restoring previous memory policy: 4 00:06:44.824 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.824 EAL: request: mp_malloc_sync 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: Heap on socket 0 was expanded by 4MB 00:06:44.824 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.824 EAL: request: mp_malloc_sync 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: Heap on socket 0 was shrunk by 4MB 00:06:44.824 EAL: Trying to obtain current memory policy. 00:06:44.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.824 EAL: Restoring previous memory policy: 4 00:06:44.824 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.824 EAL: request: mp_malloc_sync 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: Heap on socket 0 was expanded by 6MB 00:06:44.824 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.824 EAL: request: mp_malloc_sync 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: Heap on socket 0 was shrunk by 6MB 00:06:44.824 EAL: Trying to obtain current memory policy. 00:06:44.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.824 EAL: Restoring previous memory policy: 4 00:06:44.824 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.824 EAL: request: mp_malloc_sync 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: Heap on socket 0 was expanded by 10MB 00:06:44.824 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.824 EAL: request: mp_malloc_sync 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: Heap on socket 0 was shrunk by 10MB 00:06:44.824 EAL: Trying to obtain current memory policy. 
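Each memseg list reserved above is created with n_segs:8192 and hugepage_sz:2097152, which is exactly the 0x400000000 bytes (16 GiB) of virtual address space EAL reports per list; with 4 lists per socket on 2 sockets that is 128 GiB of reserved VA, none of it backed by hugepages until the heap actually grows. The arithmetic, checked in shell:

  printf '0x%x bytes per memseg list\n' $(( 8192 * 2097152 ))    # 0x400000000, as logged
  echo "$(( 4 * 2 * 16 )) GiB of VA reserved across both sockets"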
00:06:44.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.824 EAL: Restoring previous memory policy: 4 00:06:44.824 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.824 EAL: request: mp_malloc_sync 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: Heap on socket 0 was expanded by 18MB 00:06:44.824 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.824 EAL: request: mp_malloc_sync 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: Heap on socket 0 was shrunk by 18MB 00:06:44.824 EAL: Trying to obtain current memory policy. 00:06:44.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.824 EAL: Restoring previous memory policy: 4 00:06:44.824 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.824 EAL: request: mp_malloc_sync 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: Heap on socket 0 was expanded by 34MB 00:06:44.824 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.824 EAL: request: mp_malloc_sync 00:06:44.824 EAL: No shared files mode enabled, IPC is disabled 00:06:44.824 EAL: Heap on socket 0 was shrunk by 34MB 00:06:44.825 EAL: Trying to obtain current memory policy. 00:06:44.825 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.825 EAL: Restoring previous memory policy: 4 00:06:44.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.825 EAL: request: mp_malloc_sync 00:06:44.825 EAL: No shared files mode enabled, IPC is disabled 00:06:44.825 EAL: Heap on socket 0 was expanded by 66MB 00:06:44.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.825 EAL: request: mp_malloc_sync 00:06:44.825 EAL: No shared files mode enabled, IPC is disabled 00:06:44.825 EAL: Heap on socket 0 was shrunk by 66MB 00:06:44.825 EAL: Trying to obtain current memory policy. 00:06:44.825 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.825 EAL: Restoring previous memory policy: 4 00:06:44.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.825 EAL: request: mp_malloc_sync 00:06:44.825 EAL: No shared files mode enabled, IPC is disabled 00:06:44.825 EAL: Heap on socket 0 was expanded by 130MB 00:06:44.825 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.084 EAL: request: mp_malloc_sync 00:06:45.084 EAL: No shared files mode enabled, IPC is disabled 00:06:45.084 EAL: Heap on socket 0 was shrunk by 130MB 00:06:45.084 EAL: Trying to obtain current memory policy. 00:06:45.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.084 EAL: Restoring previous memory policy: 4 00:06:45.084 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.084 EAL: request: mp_malloc_sync 00:06:45.084 EAL: No shared files mode enabled, IPC is disabled 00:06:45.084 EAL: Heap on socket 0 was expanded by 258MB 00:06:45.084 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.084 EAL: request: mp_malloc_sync 00:06:45.084 EAL: No shared files mode enabled, IPC is disabled 00:06:45.084 EAL: Heap on socket 0 was shrunk by 258MB 00:06:45.084 EAL: Trying to obtain current memory policy. 
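The vtophys malloc tests around this point repeatedly expand and shrink the socket-0 heap (from 2 MB up to 1026 MB), and EAL notes that hugepages are freed exactly as allocated. One way to observe that from outside while such a test runs is to watch the kernel's hugepage counters; this is only an observation aid, not part of the test:

  # HugePages_Free should dip during each expansion and recover after the shrink
  watch -n 0.5 'grep -E "HugePages_(Total|Free)" /proc/meminfo'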
00:06:45.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.084 EAL: Restoring previous memory policy: 4 00:06:45.084 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.084 EAL: request: mp_malloc_sync 00:06:45.084 EAL: No shared files mode enabled, IPC is disabled 00:06:45.084 EAL: Heap on socket 0 was expanded by 514MB 00:06:45.084 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.343 EAL: request: mp_malloc_sync 00:06:45.343 EAL: No shared files mode enabled, IPC is disabled 00:06:45.343 EAL: Heap on socket 0 was shrunk by 514MB 00:06:45.343 EAL: Trying to obtain current memory policy. 00:06:45.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.343 EAL: Restoring previous memory policy: 4 00:06:45.343 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.343 EAL: request: mp_malloc_sync 00:06:45.343 EAL: No shared files mode enabled, IPC is disabled 00:06:45.343 EAL: Heap on socket 0 was expanded by 1026MB 00:06:45.343 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.605 EAL: request: mp_malloc_sync 00:06:45.605 EAL: No shared files mode enabled, IPC is disabled 00:06:45.605 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:45.605 passed 00:06:45.605 00:06:45.605 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.605 suites 1 1 n/a 0 0 00:06:45.605 tests 2 2 2 0 0 00:06:45.605 asserts 497 497 497 0 n/a 00:06:45.605 00:06:45.605 Elapsed time = 0.655 seconds 00:06:45.605 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.605 EAL: request: mp_malloc_sync 00:06:45.605 EAL: No shared files mode enabled, IPC is disabled 00:06:45.605 EAL: Heap on socket 0 was shrunk by 2MB 00:06:45.605 EAL: No shared files mode enabled, IPC is disabled 00:06:45.605 EAL: No shared files mode enabled, IPC is disabled 00:06:45.605 EAL: No shared files mode enabled, IPC is disabled 00:06:45.605 00:06:45.605 real 0m0.776s 00:06:45.605 user 0m0.413s 00:06:45.605 sys 0m0.337s 00:06:45.605 07:12:52 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.605 07:12:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:45.605 ************************************ 00:06:45.605 END TEST env_vtophys 00:06:45.605 ************************************ 00:06:45.605 07:12:52 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:45.605 07:12:52 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.605 07:12:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.605 07:12:52 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.605 ************************************ 00:06:45.605 START TEST env_pci 00:06:45.605 ************************************ 00:06:45.605 07:12:52 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:45.605 00:06:45.605 00:06:45.605 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.605 http://cunit.sourceforge.net/ 00:06:45.605 00:06:45.605 00:06:45.605 Suite: pci 00:06:45.605 Test: pci_hook ...[2024-07-25 07:12:52.893319] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4075229 has claimed it 00:06:45.605 EAL: Cannot find device (10000:00:01.0) 00:06:45.605 EAL: Failed to attach device on primary process 00:06:45.605 passed 00:06:45.605 00:06:45.605 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:45.605 suites 1 1 n/a 0 0 00:06:45.605 tests 1 1 1 0 0 00:06:45.605 asserts 25 25 25 0 n/a 00:06:45.605 00:06:45.605 Elapsed time = 0.036 seconds 00:06:45.605 00:06:45.605 real 0m0.058s 00:06:45.605 user 0m0.018s 00:06:45.605 sys 0m0.039s 00:06:45.605 07:12:52 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.605 07:12:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:45.605 ************************************ 00:06:45.605 END TEST env_pci 00:06:45.605 ************************************ 00:06:45.605 07:12:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:45.605 07:12:52 env -- env/env.sh@15 -- # uname 00:06:45.866 07:12:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:45.866 07:12:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:45.866 07:12:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:45.866 07:12:52 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:45.866 07:12:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.866 07:12:52 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.866 ************************************ 00:06:45.866 START TEST env_dpdk_post_init 00:06:45.866 ************************************ 00:06:45.866 07:12:53 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:45.866 EAL: Detected CPU lcores: 128 00:06:45.866 EAL: Detected NUMA nodes: 2 00:06:45.866 EAL: Detected shared linkage of DPDK 00:06:45.866 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:45.866 EAL: Selected IOVA mode 'VA' 00:06:45.866 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.866 EAL: VFIO support initialized 00:06:45.866 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:45.866 EAL: Using IOMMU type 1 (Type 1) 00:06:46.126 EAL: Ignore mapping IO port bar(1) 00:06:46.126 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:46.126 EAL: Ignore mapping IO port bar(1) 00:06:46.386 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:46.386 EAL: Ignore mapping IO port bar(1) 00:06:46.646 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:46.646 EAL: Ignore mapping IO port bar(1) 00:06:46.906 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:46.906 EAL: Ignore mapping IO port bar(1) 00:06:46.906 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:47.166 EAL: Ignore mapping IO port bar(1) 00:06:47.166 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:47.426 EAL: Ignore mapping IO port bar(1) 00:06:47.426 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:47.686 EAL: Ignore mapping IO port bar(1) 00:06:47.686 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:47.947 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:47.947 EAL: Ignore mapping IO port bar(1) 00:06:48.207 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:48.207 EAL: Ignore mapping IO port bar(1) 00:06:48.467 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
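env_dpdk_post_init above walks every allowed BDF, attaching spdk_ioat to the I/OAT channels and spdk_nvme to 0000:65:00.0 (the probe list continues just below). A quick way to see which driver each of those functions is currently bound to on the host, using plain sysfs and the BDFs from this run; unbound functions print an empty driver name:

  for dev in /sys/bus/pci/devices/0000:{00,80}:01.? /sys/bus/pci/devices/0000:65:00.0; do
      drv=$(readlink "$dev/driver")               # e.g. .../drivers/vfio-pci, or empty
      echo "$(basename "$dev") -> ${drv##*/}"
  done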
00:06:48.467 EAL: Ignore mapping IO port bar(1) 00:06:48.467 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:48.727 EAL: Ignore mapping IO port bar(1) 00:06:48.727 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:48.988 EAL: Ignore mapping IO port bar(1) 00:06:48.988 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:49.248 EAL: Ignore mapping IO port bar(1) 00:06:49.248 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:49.249 EAL: Ignore mapping IO port bar(1) 00:06:49.510 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:49.510 EAL: Ignore mapping IO port bar(1) 00:06:49.771 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:49.771 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:49.771 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:49.771 Starting DPDK initialization... 00:06:49.771 Starting SPDK post initialization... 00:06:49.771 SPDK NVMe probe 00:06:49.771 Attaching to 0000:65:00.0 00:06:49.771 Attached to 0000:65:00.0 00:06:49.771 Cleaning up... 00:06:51.687 00:06:51.687 real 0m5.711s 00:06:51.687 user 0m0.183s 00:06:51.687 sys 0m0.075s 00:06:51.687 07:12:58 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.687 07:12:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:51.687 ************************************ 00:06:51.687 END TEST env_dpdk_post_init 00:06:51.687 ************************************ 00:06:51.687 07:12:58 env -- env/env.sh@26 -- # uname 00:06:51.687 07:12:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:51.687 07:12:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:51.687 07:12:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.687 07:12:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.687 07:12:58 env -- common/autotest_common.sh@10 -- # set +x 00:06:51.687 ************************************ 00:06:51.687 START TEST env_mem_callbacks 00:06:51.687 ************************************ 00:06:51.687 07:12:58 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:51.687 EAL: Detected CPU lcores: 128 00:06:51.687 EAL: Detected NUMA nodes: 2 00:06:51.687 EAL: Detected shared linkage of DPDK 00:06:51.687 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:51.687 EAL: Selected IOVA mode 'VA' 00:06:51.687 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.687 EAL: VFIO support initialized 00:06:51.687 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:51.687 00:06:51.687 00:06:51.687 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.687 http://cunit.sourceforge.net/ 00:06:51.687 00:06:51.687 00:06:51.687 Suite: memory 00:06:51.687 Test: test ... 
00:06:51.687 register 0x200000200000 2097152 00:06:51.687 malloc 3145728 00:06:51.687 register 0x200000400000 4194304 00:06:51.687 buf 0x200000500000 len 3145728 PASSED 00:06:51.687 malloc 64 00:06:51.687 buf 0x2000004fff40 len 64 PASSED 00:06:51.687 malloc 4194304 00:06:51.687 register 0x200000800000 6291456 00:06:51.687 buf 0x200000a00000 len 4194304 PASSED 00:06:51.687 free 0x200000500000 3145728 00:06:51.687 free 0x2000004fff40 64 00:06:51.687 unregister 0x200000400000 4194304 PASSED 00:06:51.687 free 0x200000a00000 4194304 00:06:51.687 unregister 0x200000800000 6291456 PASSED 00:06:51.687 malloc 8388608 00:06:51.687 register 0x200000400000 10485760 00:06:51.687 buf 0x200000600000 len 8388608 PASSED 00:06:51.687 free 0x200000600000 8388608 00:06:51.687 unregister 0x200000400000 10485760 PASSED 00:06:51.687 passed 00:06:51.687 00:06:51.687 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.687 suites 1 1 n/a 0 0 00:06:51.687 tests 1 1 1 0 0 00:06:51.687 asserts 15 15 15 0 n/a 00:06:51.687 00:06:51.687 Elapsed time = 0.005 seconds 00:06:51.687 00:06:51.687 real 0m0.058s 00:06:51.687 user 0m0.025s 00:06:51.687 sys 0m0.032s 00:06:51.687 07:12:58 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.687 07:12:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:51.687 ************************************ 00:06:51.687 END TEST env_mem_callbacks 00:06:51.687 ************************************ 00:06:51.687 00:06:51.687 real 0m7.318s 00:06:51.687 user 0m1.027s 00:06:51.687 sys 0m0.841s 00:06:51.687 07:12:58 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.687 07:12:58 env -- common/autotest_common.sh@10 -- # set +x 00:06:51.688 ************************************ 00:06:51.688 END TEST env 00:06:51.688 ************************************ 00:06:51.688 07:12:58 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:51.688 07:12:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.688 07:12:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.688 07:12:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.688 ************************************ 00:06:51.688 START TEST rpc 00:06:51.688 ************************************ 00:06:51.688 07:12:58 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:51.949 * Looking for test storage... 00:06:51.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:51.949 07:12:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4076671 00:06:51.949 07:12:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.949 07:12:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:51.949 07:12:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4076671 00:06:51.949 07:12:59 rpc -- common/autotest_common.sh@831 -- # '[' -z 4076671 ']' 00:06:51.949 07:12:59 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.949 07:12:59 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.949 07:12:59 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
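The rpc suite above starts spdk_tgt with '-e bdev' and then sits in waitforlisten until the target's RPC socket is usable. A simplified wait with the same intent; the real helper in autotest_common.sh polls the RPC layer itself, while this sketch only waits for the default Unix socket path from the log to appear:

  sock=/var/tmp/spdk.sock                 # default RPC socket path, as in the log above
  for _ in $(seq 1 100); do
      [ -S "$sock" ] && break
      sleep 0.1
  done
  [ -S "$sock" ] || { echo "spdk_tgt never created $sock" >&2; exit 1; }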
00:06:51.949 07:12:59 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.949 07:12:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.949 [2024-07-25 07:12:59.153356] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:06:51.949 [2024-07-25 07:12:59.153429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4076671 ] 00:06:51.949 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.949 [2024-07-25 07:12:59.219915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.949 [2024-07-25 07:12:59.293729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:51.949 [2024-07-25 07:12:59.293769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4076671' to capture a snapshot of events at runtime. 00:06:51.949 [2024-07-25 07:12:59.293777] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.949 [2024-07-25 07:12:59.293784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.949 [2024-07-25 07:12:59.293789] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4076671 for offline analysis/debug. 00:06:51.949 [2024-07-25 07:12:59.293813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.891 07:12:59 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.891 07:12:59 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:52.891 07:12:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:52.891 07:12:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:52.891 07:12:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:52.891 07:12:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:52.891 07:12:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.891 07:12:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.891 07:12:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.891 ************************************ 00:06:52.891 START TEST rpc_integrity 00:06:52.891 ************************************ 00:06:52.891 07:12:59 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:52.891 07:12:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:52.891 07:12:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.891 07:12:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.891 07:12:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.891 07:12:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:52.891 07:12:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:52.891 07:13:00 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:52.891 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:52.891 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.891 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.891 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.891 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:52.891 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:52.891 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.891 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.891 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.891 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:52.891 { 00:06:52.891 "name": "Malloc0", 00:06:52.891 "aliases": [ 00:06:52.891 "79860970-e859-409c-8f54-434786c4a7bc" 00:06:52.891 ], 00:06:52.891 "product_name": "Malloc disk", 00:06:52.891 "block_size": 512, 00:06:52.891 "num_blocks": 16384, 00:06:52.891 "uuid": "79860970-e859-409c-8f54-434786c4a7bc", 00:06:52.891 "assigned_rate_limits": { 00:06:52.891 "rw_ios_per_sec": 0, 00:06:52.891 "rw_mbytes_per_sec": 0, 00:06:52.891 "r_mbytes_per_sec": 0, 00:06:52.891 "w_mbytes_per_sec": 0 00:06:52.891 }, 00:06:52.891 "claimed": false, 00:06:52.891 "zoned": false, 00:06:52.891 "supported_io_types": { 00:06:52.891 "read": true, 00:06:52.891 "write": true, 00:06:52.891 "unmap": true, 00:06:52.891 "flush": true, 00:06:52.892 "reset": true, 00:06:52.892 "nvme_admin": false, 00:06:52.892 "nvme_io": false, 00:06:52.892 "nvme_io_md": false, 00:06:52.892 "write_zeroes": true, 00:06:52.892 "zcopy": true, 00:06:52.892 "get_zone_info": false, 00:06:52.892 "zone_management": false, 00:06:52.892 "zone_append": false, 00:06:52.892 "compare": false, 00:06:52.892 "compare_and_write": false, 00:06:52.892 "abort": true, 00:06:52.892 "seek_hole": false, 00:06:52.892 "seek_data": false, 00:06:52.892 "copy": true, 00:06:52.892 "nvme_iov_md": false 00:06:52.892 }, 00:06:52.892 "memory_domains": [ 00:06:52.892 { 00:06:52.892 "dma_device_id": "system", 00:06:52.892 "dma_device_type": 1 00:06:52.892 }, 00:06:52.892 { 00:06:52.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.892 "dma_device_type": 2 00:06:52.892 } 00:06:52.892 ], 00:06:52.892 "driver_specific": {} 00:06:52.892 } 00:06:52.892 ]' 00:06:52.892 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:52.892 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:52.892 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:52.892 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.892 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.892 [2024-07-25 07:13:00.111532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:52.892 [2024-07-25 07:13:00.111569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:52.892 [2024-07-25 07:13:00.111582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9e6d10 00:06:52.892 [2024-07-25 07:13:00.111589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:52.892 [2024-07-25 07:13:00.112984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
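The rpc_integrity steps above create an 8 MB malloc bdev, inspect the bdev list with bdev_get_bdevs piped through jq, and then layer a passthru bdev on top of Malloc0; the matching vbdev_passthru registration notice continues just below. A roughly equivalent manual RPC sequence against a running target, under the same path and socket assumptions as the earlier sketch (it mirrors the flow shown in the log, it is not the test script itself):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"
    # Create an 8 MB malloc bdev with 512-byte blocks; the RPC prints the new bdev name.
    malloc=$("$rpc" bdev_malloc_create 8 512)
    "$rpc" bdev_get_bdevs | jq length                  # expect 1 on a freshly started target
    "$rpc" bdev_passthru_create -b "$malloc" -p Passthru0
    "$rpc" bdev_get_bdevs | jq length                  # expect 2: the malloc plus the passthru
    # Tear down in reverse order, as rpc_integrity does further on.
    "$rpc" bdev_passthru_delete Passthru0
    "$rpc" bdev_malloc_delete "$malloc"
    "$rpc" bdev_get_bdevs | jq length                  # back to 0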
00:06:52.892 [2024-07-25 07:13:00.113005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:52.892 Passthru0 00:06:52.892 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.892 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:52.892 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.892 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.892 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.892 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:52.892 { 00:06:52.892 "name": "Malloc0", 00:06:52.892 "aliases": [ 00:06:52.892 "79860970-e859-409c-8f54-434786c4a7bc" 00:06:52.892 ], 00:06:52.892 "product_name": "Malloc disk", 00:06:52.892 "block_size": 512, 00:06:52.892 "num_blocks": 16384, 00:06:52.892 "uuid": "79860970-e859-409c-8f54-434786c4a7bc", 00:06:52.892 "assigned_rate_limits": { 00:06:52.892 "rw_ios_per_sec": 0, 00:06:52.892 "rw_mbytes_per_sec": 0, 00:06:52.892 "r_mbytes_per_sec": 0, 00:06:52.892 "w_mbytes_per_sec": 0 00:06:52.892 }, 00:06:52.892 "claimed": true, 00:06:52.892 "claim_type": "exclusive_write", 00:06:52.892 "zoned": false, 00:06:52.892 "supported_io_types": { 00:06:52.892 "read": true, 00:06:52.892 "write": true, 00:06:52.892 "unmap": true, 00:06:52.892 "flush": true, 00:06:52.892 "reset": true, 00:06:52.892 "nvme_admin": false, 00:06:52.892 "nvme_io": false, 00:06:52.892 "nvme_io_md": false, 00:06:52.892 "write_zeroes": true, 00:06:52.892 "zcopy": true, 00:06:52.892 "get_zone_info": false, 00:06:52.892 "zone_management": false, 00:06:52.892 "zone_append": false, 00:06:52.892 "compare": false, 00:06:52.892 "compare_and_write": false, 00:06:52.892 "abort": true, 00:06:52.892 "seek_hole": false, 00:06:52.892 "seek_data": false, 00:06:52.892 "copy": true, 00:06:52.892 "nvme_iov_md": false 00:06:52.892 }, 00:06:52.892 "memory_domains": [ 00:06:52.892 { 00:06:52.892 "dma_device_id": "system", 00:06:52.892 "dma_device_type": 1 00:06:52.892 }, 00:06:52.892 { 00:06:52.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.892 "dma_device_type": 2 00:06:52.892 } 00:06:52.892 ], 00:06:52.892 "driver_specific": {} 00:06:52.892 }, 00:06:52.892 { 00:06:52.892 "name": "Passthru0", 00:06:52.892 "aliases": [ 00:06:52.892 "6c908b1c-48f9-5f86-afb9-4d3b07a7baad" 00:06:52.892 ], 00:06:52.892 "product_name": "passthru", 00:06:52.892 "block_size": 512, 00:06:52.892 "num_blocks": 16384, 00:06:52.892 "uuid": "6c908b1c-48f9-5f86-afb9-4d3b07a7baad", 00:06:52.892 "assigned_rate_limits": { 00:06:52.892 "rw_ios_per_sec": 0, 00:06:52.892 "rw_mbytes_per_sec": 0, 00:06:52.892 "r_mbytes_per_sec": 0, 00:06:52.892 "w_mbytes_per_sec": 0 00:06:52.892 }, 00:06:52.892 "claimed": false, 00:06:52.892 "zoned": false, 00:06:52.892 "supported_io_types": { 00:06:52.892 "read": true, 00:06:52.892 "write": true, 00:06:52.892 "unmap": true, 00:06:52.892 "flush": true, 00:06:52.892 "reset": true, 00:06:52.892 "nvme_admin": false, 00:06:52.892 "nvme_io": false, 00:06:52.892 "nvme_io_md": false, 00:06:52.892 "write_zeroes": true, 00:06:52.892 "zcopy": true, 00:06:52.892 "get_zone_info": false, 00:06:52.892 "zone_management": false, 00:06:52.892 "zone_append": false, 00:06:52.892 "compare": false, 00:06:52.892 "compare_and_write": false, 00:06:52.892 "abort": true, 00:06:52.892 "seek_hole": false, 00:06:52.892 "seek_data": false, 00:06:52.892 "copy": true, 00:06:52.892 "nvme_iov_md": false 00:06:52.892 
}, 00:06:52.892 "memory_domains": [ 00:06:52.892 { 00:06:52.892 "dma_device_id": "system", 00:06:52.892 "dma_device_type": 1 00:06:52.892 }, 00:06:52.892 { 00:06:52.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.892 "dma_device_type": 2 00:06:52.892 } 00:06:52.892 ], 00:06:52.892 "driver_specific": { 00:06:52.892 "passthru": { 00:06:52.892 "name": "Passthru0", 00:06:52.892 "base_bdev_name": "Malloc0" 00:06:52.892 } 00:06:52.892 } 00:06:52.892 } 00:06:52.892 ]' 00:06:52.892 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:52.892 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:52.892 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:52.892 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.892 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.892 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.893 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:52.893 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.893 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.893 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.893 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:52.893 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.893 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:52.893 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.893 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:52.893 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:53.154 07:13:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:53.154 00:06:53.154 real 0m0.308s 00:06:53.154 user 0m0.191s 00:06:53.154 sys 0m0.046s 00:06:53.154 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.154 07:13:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:53.154 ************************************ 00:06:53.154 END TEST rpc_integrity 00:06:53.154 ************************************ 00:06:53.154 07:13:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:53.154 07:13:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.154 07:13:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.154 07:13:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.154 ************************************ 00:06:53.154 START TEST rpc_plugins 00:06:53.154 ************************************ 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:53.154 07:13:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.154 07:13:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:53.154 07:13:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.154 07:13:00 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.154 07:13:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:53.154 { 00:06:53.154 "name": "Malloc1", 00:06:53.154 "aliases": [ 00:06:53.154 "d8d5332f-9d77-48ae-8745-94d8af90942c" 00:06:53.154 ], 00:06:53.154 "product_name": "Malloc disk", 00:06:53.154 "block_size": 4096, 00:06:53.154 "num_blocks": 256, 00:06:53.154 "uuid": "d8d5332f-9d77-48ae-8745-94d8af90942c", 00:06:53.154 "assigned_rate_limits": { 00:06:53.154 "rw_ios_per_sec": 0, 00:06:53.154 "rw_mbytes_per_sec": 0, 00:06:53.154 "r_mbytes_per_sec": 0, 00:06:53.154 "w_mbytes_per_sec": 0 00:06:53.154 }, 00:06:53.154 "claimed": false, 00:06:53.154 "zoned": false, 00:06:53.154 "supported_io_types": { 00:06:53.154 "read": true, 00:06:53.154 "write": true, 00:06:53.154 "unmap": true, 00:06:53.154 "flush": true, 00:06:53.154 "reset": true, 00:06:53.154 "nvme_admin": false, 00:06:53.154 "nvme_io": false, 00:06:53.154 "nvme_io_md": false, 00:06:53.154 "write_zeroes": true, 00:06:53.154 "zcopy": true, 00:06:53.154 "get_zone_info": false, 00:06:53.154 "zone_management": false, 00:06:53.154 "zone_append": false, 00:06:53.154 "compare": false, 00:06:53.154 "compare_and_write": false, 00:06:53.154 "abort": true, 00:06:53.154 "seek_hole": false, 00:06:53.154 "seek_data": false, 00:06:53.154 "copy": true, 00:06:53.154 "nvme_iov_md": false 00:06:53.154 }, 00:06:53.154 "memory_domains": [ 00:06:53.154 { 00:06:53.154 "dma_device_id": "system", 00:06:53.154 "dma_device_type": 1 00:06:53.154 }, 00:06:53.154 { 00:06:53.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.154 "dma_device_type": 2 00:06:53.154 } 00:06:53.154 ], 00:06:53.154 "driver_specific": {} 00:06:53.154 } 00:06:53.154 ]' 00:06:53.154 07:13:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:53.154 07:13:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:53.154 07:13:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.154 07:13:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.154 07:13:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:53.154 07:13:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:53.154 07:13:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:53.154 00:06:53.154 real 0m0.149s 00:06:53.154 user 0m0.105s 00:06:53.154 sys 0m0.012s 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.154 07:13:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:53.154 ************************************ 00:06:53.154 END TEST rpc_plugins 00:06:53.154 ************************************ 00:06:53.415 07:13:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:53.415 07:13:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.415 07:13:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.415 07:13:00 
rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.415 ************************************ 00:06:53.415 START TEST rpc_trace_cmd_test 00:06:53.415 ************************************ 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:53.415 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4076671", 00:06:53.415 "tpoint_group_mask": "0x8", 00:06:53.415 "iscsi_conn": { 00:06:53.415 "mask": "0x2", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "scsi": { 00:06:53.415 "mask": "0x4", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "bdev": { 00:06:53.415 "mask": "0x8", 00:06:53.415 "tpoint_mask": "0xffffffffffffffff" 00:06:53.415 }, 00:06:53.415 "nvmf_rdma": { 00:06:53.415 "mask": "0x10", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "nvmf_tcp": { 00:06:53.415 "mask": "0x20", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "ftl": { 00:06:53.415 "mask": "0x40", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "blobfs": { 00:06:53.415 "mask": "0x80", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "dsa": { 00:06:53.415 "mask": "0x200", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "thread": { 00:06:53.415 "mask": "0x400", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "nvme_pcie": { 00:06:53.415 "mask": "0x800", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "iaa": { 00:06:53.415 "mask": "0x1000", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "nvme_tcp": { 00:06:53.415 "mask": "0x2000", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "bdev_nvme": { 00:06:53.415 "mask": "0x4000", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 }, 00:06:53.415 "sock": { 00:06:53.415 "mask": "0x8000", 00:06:53.415 "tpoint_mask": "0x0" 00:06:53.415 } 00:06:53.415 }' 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:53.415 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:53.675 07:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:53.675 00:06:53.675 real 0m0.244s 00:06:53.675 user 0m0.207s 00:06:53.675 sys 0m0.031s 00:06:53.675 07:13:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.675 07:13:00 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.675 ************************************ 00:06:53.675 END TEST rpc_trace_cmd_test 00:06:53.675 ************************************ 00:06:53.675 07:13:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:53.675 07:13:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:53.675 07:13:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:53.675 07:13:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.675 07:13:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.675 07:13:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.675 ************************************ 00:06:53.675 START TEST rpc_daemon_integrity 00:06:53.675 ************************************ 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:53.675 { 00:06:53.675 "name": "Malloc2", 00:06:53.675 "aliases": [ 00:06:53.675 "a038f0c3-52b7-4a82-abc7-f1650682b07c" 00:06:53.675 ], 00:06:53.675 "product_name": "Malloc disk", 00:06:53.675 "block_size": 512, 00:06:53.675 "num_blocks": 16384, 00:06:53.675 "uuid": "a038f0c3-52b7-4a82-abc7-f1650682b07c", 00:06:53.675 "assigned_rate_limits": { 00:06:53.675 "rw_ios_per_sec": 0, 00:06:53.675 "rw_mbytes_per_sec": 0, 00:06:53.675 "r_mbytes_per_sec": 0, 00:06:53.675 "w_mbytes_per_sec": 0 00:06:53.675 }, 00:06:53.675 "claimed": false, 00:06:53.675 "zoned": false, 00:06:53.675 "supported_io_types": { 00:06:53.675 "read": true, 00:06:53.675 "write": true, 00:06:53.675 "unmap": true, 00:06:53.675 "flush": true, 00:06:53.675 "reset": true, 00:06:53.675 "nvme_admin": false, 00:06:53.675 "nvme_io": false, 00:06:53.675 "nvme_io_md": false, 00:06:53.675 "write_zeroes": true, 00:06:53.675 "zcopy": true, 00:06:53.675 "get_zone_info": false, 00:06:53.675 "zone_management": false, 00:06:53.675 "zone_append": false, 00:06:53.675 "compare": false, 00:06:53.675 "compare_and_write": false, 
00:06:53.675 "abort": true, 00:06:53.675 "seek_hole": false, 00:06:53.675 "seek_data": false, 00:06:53.675 "copy": true, 00:06:53.675 "nvme_iov_md": false 00:06:53.675 }, 00:06:53.675 "memory_domains": [ 00:06:53.675 { 00:06:53.675 "dma_device_id": "system", 00:06:53.675 "dma_device_type": 1 00:06:53.675 }, 00:06:53.675 { 00:06:53.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.675 "dma_device_type": 2 00:06:53.675 } 00:06:53.675 ], 00:06:53.675 "driver_specific": {} 00:06:53.675 } 00:06:53.675 ]' 00:06:53.675 07:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:53.675 07:13:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:53.675 07:13:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:53.675 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.675 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:53.675 [2024-07-25 07:13:01.026002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:53.675 [2024-07-25 07:13:01.026033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:53.675 [2024-07-25 07:13:01.026045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9e7680 00:06:53.675 [2024-07-25 07:13:01.026051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:53.675 [2024-07-25 07:13:01.027275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:53.675 [2024-07-25 07:13:01.027293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:53.675 Passthru0 00:06:53.675 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.675 07:13:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:53.676 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.676 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:53.936 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.936 07:13:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:53.936 { 00:06:53.936 "name": "Malloc2", 00:06:53.936 "aliases": [ 00:06:53.936 "a038f0c3-52b7-4a82-abc7-f1650682b07c" 00:06:53.936 ], 00:06:53.936 "product_name": "Malloc disk", 00:06:53.936 "block_size": 512, 00:06:53.937 "num_blocks": 16384, 00:06:53.937 "uuid": "a038f0c3-52b7-4a82-abc7-f1650682b07c", 00:06:53.937 "assigned_rate_limits": { 00:06:53.937 "rw_ios_per_sec": 0, 00:06:53.937 "rw_mbytes_per_sec": 0, 00:06:53.937 "r_mbytes_per_sec": 0, 00:06:53.937 "w_mbytes_per_sec": 0 00:06:53.937 }, 00:06:53.937 "claimed": true, 00:06:53.937 "claim_type": "exclusive_write", 00:06:53.937 "zoned": false, 00:06:53.937 "supported_io_types": { 00:06:53.937 "read": true, 00:06:53.937 "write": true, 00:06:53.937 "unmap": true, 00:06:53.937 "flush": true, 00:06:53.937 "reset": true, 00:06:53.937 "nvme_admin": false, 00:06:53.937 "nvme_io": false, 00:06:53.937 "nvme_io_md": false, 00:06:53.937 "write_zeroes": true, 00:06:53.937 "zcopy": true, 00:06:53.937 "get_zone_info": false, 00:06:53.937 "zone_management": false, 00:06:53.937 "zone_append": false, 00:06:53.937 "compare": false, 00:06:53.937 "compare_and_write": false, 00:06:53.937 "abort": true, 00:06:53.937 "seek_hole": false, 00:06:53.937 "seek_data": false, 00:06:53.937 "copy": true, 
00:06:53.937 "nvme_iov_md": false 00:06:53.937 }, 00:06:53.937 "memory_domains": [ 00:06:53.937 { 00:06:53.937 "dma_device_id": "system", 00:06:53.937 "dma_device_type": 1 00:06:53.937 }, 00:06:53.937 { 00:06:53.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.937 "dma_device_type": 2 00:06:53.937 } 00:06:53.937 ], 00:06:53.937 "driver_specific": {} 00:06:53.937 }, 00:06:53.937 { 00:06:53.937 "name": "Passthru0", 00:06:53.937 "aliases": [ 00:06:53.937 "d87d51b8-4d91-5aa2-ab88-82680d24fef7" 00:06:53.937 ], 00:06:53.937 "product_name": "passthru", 00:06:53.937 "block_size": 512, 00:06:53.937 "num_blocks": 16384, 00:06:53.937 "uuid": "d87d51b8-4d91-5aa2-ab88-82680d24fef7", 00:06:53.937 "assigned_rate_limits": { 00:06:53.937 "rw_ios_per_sec": 0, 00:06:53.937 "rw_mbytes_per_sec": 0, 00:06:53.937 "r_mbytes_per_sec": 0, 00:06:53.937 "w_mbytes_per_sec": 0 00:06:53.937 }, 00:06:53.937 "claimed": false, 00:06:53.937 "zoned": false, 00:06:53.937 "supported_io_types": { 00:06:53.937 "read": true, 00:06:53.937 "write": true, 00:06:53.937 "unmap": true, 00:06:53.937 "flush": true, 00:06:53.937 "reset": true, 00:06:53.937 "nvme_admin": false, 00:06:53.937 "nvme_io": false, 00:06:53.937 "nvme_io_md": false, 00:06:53.937 "write_zeroes": true, 00:06:53.937 "zcopy": true, 00:06:53.937 "get_zone_info": false, 00:06:53.937 "zone_management": false, 00:06:53.937 "zone_append": false, 00:06:53.937 "compare": false, 00:06:53.937 "compare_and_write": false, 00:06:53.937 "abort": true, 00:06:53.937 "seek_hole": false, 00:06:53.937 "seek_data": false, 00:06:53.937 "copy": true, 00:06:53.937 "nvme_iov_md": false 00:06:53.937 }, 00:06:53.937 "memory_domains": [ 00:06:53.937 { 00:06:53.937 "dma_device_id": "system", 00:06:53.937 "dma_device_type": 1 00:06:53.937 }, 00:06:53.937 { 00:06:53.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.937 "dma_device_type": 2 00:06:53.937 } 00:06:53.937 ], 00:06:53.937 "driver_specific": { 00:06:53.937 "passthru": { 00:06:53.937 "name": "Passthru0", 00:06:53.937 "base_bdev_name": "Malloc2" 00:06:53.937 } 00:06:53.937 } 00:06:53.937 } 00:06:53.937 ]' 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:53.937 07:13:01 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:53.937 00:06:53.937 real 0m0.286s 00:06:53.937 user 0m0.198s 00:06:53.937 sys 0m0.037s 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.937 07:13:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:53.937 ************************************ 00:06:53.937 END TEST rpc_daemon_integrity 00:06:53.937 ************************************ 00:06:53.937 07:13:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:53.937 07:13:01 rpc -- rpc/rpc.sh@84 -- # killprocess 4076671 00:06:53.937 07:13:01 rpc -- common/autotest_common.sh@950 -- # '[' -z 4076671 ']' 00:06:53.937 07:13:01 rpc -- common/autotest_common.sh@954 -- # kill -0 4076671 00:06:53.937 07:13:01 rpc -- common/autotest_common.sh@955 -- # uname 00:06:53.937 07:13:01 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.937 07:13:01 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4076671 00:06:53.937 07:13:01 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.937 07:13:01 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.937 07:13:01 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4076671' 00:06:53.937 killing process with pid 4076671 00:06:53.937 07:13:01 rpc -- common/autotest_common.sh@969 -- # kill 4076671 00:06:53.937 07:13:01 rpc -- common/autotest_common.sh@974 -- # wait 4076671 00:06:54.198 00:06:54.198 real 0m2.497s 00:06:54.198 user 0m3.305s 00:06:54.198 sys 0m0.704s 00:06:54.198 07:13:01 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.198 07:13:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.198 ************************************ 00:06:54.198 END TEST rpc 00:06:54.198 ************************************ 00:06:54.198 07:13:01 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:54.198 07:13:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.198 07:13:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.198 07:13:01 -- common/autotest_common.sh@10 -- # set +x 00:06:54.198 ************************************ 00:06:54.198 START TEST skip_rpc 00:06:54.198 ************************************ 00:06:54.198 07:13:01 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:54.458 * Looking for test storage... 
00:06:54.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:54.459 07:13:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:54.459 07:13:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:54.459 07:13:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:54.459 07:13:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.459 07:13:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.459 07:13:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.459 ************************************ 00:06:54.459 START TEST skip_rpc 00:06:54.459 ************************************ 00:06:54.459 07:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:54.459 07:13:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4077223 00:06:54.459 07:13:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.459 07:13:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:54.459 07:13:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:54.459 [2024-07-25 07:13:01.758220] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:06:54.459 [2024-07-25 07:13:01.758283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4077223 ] 00:06:54.459 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.459 [2024-07-25 07:13:01.821526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.721 [2024-07-25 07:13:01.896905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.034 07:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4077223 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 4077223 ']' 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 4077223 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4077223 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4077223' 00:07:00.035 killing process with pid 4077223 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 4077223 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 4077223 00:07:00.035 00:07:00.035 real 0m5.279s 00:07:00.035 user 0m5.085s 00:07:00.035 sys 0m0.234s 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.035 07:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.035 ************************************ 00:07:00.035 END TEST skip_rpc 00:07:00.035 ************************************ 00:07:00.035 07:13:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:00.035 07:13:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.035 07:13:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.035 07:13:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.035 ************************************ 00:07:00.035 START TEST skip_rpc_with_json 00:07:00.035 ************************************ 00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4078410 00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4078410 00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 4078410 ']' 00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
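The skip_rpc case that finishes above starts spdk_tgt with --no-rpc-server and then asserts that rpc_cmd spdk_get_version fails (the NOT wrapper demands a non-zero exit), which proves no RPC listener was created. A hand-rolled version of that negative check, with the same path and socket assumptions as the earlier sketches:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                            # the test script likewise just sleeps before probing
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        echo "FAIL: RPC server answered although --no-rpc-server was given" >&2
        kill -9 $tgt_pid
        exit 1
    fi
    echo "OK: no RPC listener, as expected"
    kill -9 $tgt_pid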
00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.035 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:00.035 [2024-07-25 07:13:07.110772] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:00.035 [2024-07-25 07:13:07.110825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078410 ] 00:07:00.035 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.035 [2024-07-25 07:13:07.171655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.035 [2024-07-25 07:13:07.242841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:00.606 [2024-07-25 07:13:07.873800] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:00.606 request: 00:07:00.606 { 00:07:00.606 "trtype": "tcp", 00:07:00.606 "method": "nvmf_get_transports", 00:07:00.606 "req_id": 1 00:07:00.606 } 00:07:00.606 Got JSON-RPC error response 00:07:00.606 response: 00:07:00.606 { 00:07:00.606 "code": -19, 00:07:00.606 "message": "No such device" 00:07:00.606 } 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:00.606 [2024-07-25 07:13:07.885921] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.606 07:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:00.868 07:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.868 07:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:00.868 { 00:07:00.868 "subsystems": [ 00:07:00.868 { 00:07:00.868 "subsystem": "vfio_user_target", 00:07:00.868 "config": null 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "keyring", 00:07:00.868 "config": [] 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "iobuf", 00:07:00.868 "config": [ 00:07:00.868 { 00:07:00.868 "method": "iobuf_set_options", 00:07:00.868 "params": { 00:07:00.868 "small_pool_count": 8192, 00:07:00.868 "large_pool_count": 1024, 00:07:00.868 "small_bufsize": 8192, 00:07:00.868 "large_bufsize": 
135168 00:07:00.868 } 00:07:00.868 } 00:07:00.868 ] 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "sock", 00:07:00.868 "config": [ 00:07:00.868 { 00:07:00.868 "method": "sock_set_default_impl", 00:07:00.868 "params": { 00:07:00.868 "impl_name": "posix" 00:07:00.868 } 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "method": "sock_impl_set_options", 00:07:00.868 "params": { 00:07:00.868 "impl_name": "ssl", 00:07:00.868 "recv_buf_size": 4096, 00:07:00.868 "send_buf_size": 4096, 00:07:00.868 "enable_recv_pipe": true, 00:07:00.868 "enable_quickack": false, 00:07:00.868 "enable_placement_id": 0, 00:07:00.868 "enable_zerocopy_send_server": true, 00:07:00.868 "enable_zerocopy_send_client": false, 00:07:00.868 "zerocopy_threshold": 0, 00:07:00.868 "tls_version": 0, 00:07:00.868 "enable_ktls": false 00:07:00.868 } 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "method": "sock_impl_set_options", 00:07:00.868 "params": { 00:07:00.868 "impl_name": "posix", 00:07:00.868 "recv_buf_size": 2097152, 00:07:00.868 "send_buf_size": 2097152, 00:07:00.868 "enable_recv_pipe": true, 00:07:00.868 "enable_quickack": false, 00:07:00.868 "enable_placement_id": 0, 00:07:00.868 "enable_zerocopy_send_server": true, 00:07:00.868 "enable_zerocopy_send_client": false, 00:07:00.868 "zerocopy_threshold": 0, 00:07:00.868 "tls_version": 0, 00:07:00.868 "enable_ktls": false 00:07:00.868 } 00:07:00.868 } 00:07:00.868 ] 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "vmd", 00:07:00.868 "config": [] 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "accel", 00:07:00.868 "config": [ 00:07:00.868 { 00:07:00.868 "method": "accel_set_options", 00:07:00.868 "params": { 00:07:00.868 "small_cache_size": 128, 00:07:00.868 "large_cache_size": 16, 00:07:00.868 "task_count": 2048, 00:07:00.868 "sequence_count": 2048, 00:07:00.868 "buf_count": 2048 00:07:00.868 } 00:07:00.868 } 00:07:00.868 ] 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "bdev", 00:07:00.868 "config": [ 00:07:00.868 { 00:07:00.868 "method": "bdev_set_options", 00:07:00.868 "params": { 00:07:00.868 "bdev_io_pool_size": 65535, 00:07:00.868 "bdev_io_cache_size": 256, 00:07:00.868 "bdev_auto_examine": true, 00:07:00.868 "iobuf_small_cache_size": 128, 00:07:00.868 "iobuf_large_cache_size": 16 00:07:00.868 } 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "method": "bdev_raid_set_options", 00:07:00.868 "params": { 00:07:00.868 "process_window_size_kb": 1024, 00:07:00.868 "process_max_bandwidth_mb_sec": 0 00:07:00.868 } 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "method": "bdev_iscsi_set_options", 00:07:00.868 "params": { 00:07:00.868 "timeout_sec": 30 00:07:00.868 } 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "method": "bdev_nvme_set_options", 00:07:00.868 "params": { 00:07:00.868 "action_on_timeout": "none", 00:07:00.868 "timeout_us": 0, 00:07:00.868 "timeout_admin_us": 0, 00:07:00.868 "keep_alive_timeout_ms": 10000, 00:07:00.868 "arbitration_burst": 0, 00:07:00.868 "low_priority_weight": 0, 00:07:00.868 "medium_priority_weight": 0, 00:07:00.868 "high_priority_weight": 0, 00:07:00.868 "nvme_adminq_poll_period_us": 10000, 00:07:00.868 "nvme_ioq_poll_period_us": 0, 00:07:00.868 "io_queue_requests": 0, 00:07:00.868 "delay_cmd_submit": true, 00:07:00.868 "transport_retry_count": 4, 00:07:00.868 "bdev_retry_count": 3, 00:07:00.868 "transport_ack_timeout": 0, 00:07:00.868 "ctrlr_loss_timeout_sec": 0, 00:07:00.868 "reconnect_delay_sec": 0, 00:07:00.868 "fast_io_fail_timeout_sec": 0, 00:07:00.868 "disable_auto_failback": false, 00:07:00.868 "generate_uuids": 
false, 00:07:00.868 "transport_tos": 0, 00:07:00.868 "nvme_error_stat": false, 00:07:00.868 "rdma_srq_size": 0, 00:07:00.868 "io_path_stat": false, 00:07:00.868 "allow_accel_sequence": false, 00:07:00.868 "rdma_max_cq_size": 0, 00:07:00.868 "rdma_cm_event_timeout_ms": 0, 00:07:00.868 "dhchap_digests": [ 00:07:00.868 "sha256", 00:07:00.868 "sha384", 00:07:00.868 "sha512" 00:07:00.868 ], 00:07:00.868 "dhchap_dhgroups": [ 00:07:00.868 "null", 00:07:00.868 "ffdhe2048", 00:07:00.868 "ffdhe3072", 00:07:00.868 "ffdhe4096", 00:07:00.868 "ffdhe6144", 00:07:00.868 "ffdhe8192" 00:07:00.868 ] 00:07:00.868 } 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "method": "bdev_nvme_set_hotplug", 00:07:00.868 "params": { 00:07:00.868 "period_us": 100000, 00:07:00.868 "enable": false 00:07:00.868 } 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "method": "bdev_wait_for_examine" 00:07:00.868 } 00:07:00.868 ] 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "scsi", 00:07:00.868 "config": null 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "scheduler", 00:07:00.868 "config": [ 00:07:00.868 { 00:07:00.868 "method": "framework_set_scheduler", 00:07:00.868 "params": { 00:07:00.868 "name": "static" 00:07:00.868 } 00:07:00.868 } 00:07:00.868 ] 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "vhost_scsi", 00:07:00.868 "config": [] 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "vhost_blk", 00:07:00.868 "config": [] 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "ublk", 00:07:00.868 "config": [] 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "nbd", 00:07:00.868 "config": [] 00:07:00.868 }, 00:07:00.868 { 00:07:00.868 "subsystem": "nvmf", 00:07:00.868 "config": [ 00:07:00.868 { 00:07:00.869 "method": "nvmf_set_config", 00:07:00.869 "params": { 00:07:00.869 "discovery_filter": "match_any", 00:07:00.869 "admin_cmd_passthru": { 00:07:00.869 "identify_ctrlr": false 00:07:00.869 } 00:07:00.869 } 00:07:00.869 }, 00:07:00.869 { 00:07:00.869 "method": "nvmf_set_max_subsystems", 00:07:00.869 "params": { 00:07:00.869 "max_subsystems": 1024 00:07:00.869 } 00:07:00.869 }, 00:07:00.869 { 00:07:00.869 "method": "nvmf_set_crdt", 00:07:00.869 "params": { 00:07:00.869 "crdt1": 0, 00:07:00.869 "crdt2": 0, 00:07:00.869 "crdt3": 0 00:07:00.869 } 00:07:00.869 }, 00:07:00.869 { 00:07:00.869 "method": "nvmf_create_transport", 00:07:00.869 "params": { 00:07:00.869 "trtype": "TCP", 00:07:00.869 "max_queue_depth": 128, 00:07:00.869 "max_io_qpairs_per_ctrlr": 127, 00:07:00.869 "in_capsule_data_size": 4096, 00:07:00.869 "max_io_size": 131072, 00:07:00.869 "io_unit_size": 131072, 00:07:00.869 "max_aq_depth": 128, 00:07:00.869 "num_shared_buffers": 511, 00:07:00.869 "buf_cache_size": 4294967295, 00:07:00.869 "dif_insert_or_strip": false, 00:07:00.869 "zcopy": false, 00:07:00.869 "c2h_success": true, 00:07:00.869 "sock_priority": 0, 00:07:00.869 "abort_timeout_sec": 1, 00:07:00.869 "ack_timeout": 0, 00:07:00.869 "data_wr_pool_size": 0 00:07:00.869 } 00:07:00.869 } 00:07:00.869 ] 00:07:00.869 }, 00:07:00.869 { 00:07:00.869 "subsystem": "iscsi", 00:07:00.869 "config": [ 00:07:00.869 { 00:07:00.869 "method": "iscsi_set_options", 00:07:00.869 "params": { 00:07:00.869 "node_base": "iqn.2016-06.io.spdk", 00:07:00.869 "max_sessions": 128, 00:07:00.869 "max_connections_per_session": 2, 00:07:00.869 "max_queue_depth": 64, 00:07:00.869 "default_time2wait": 2, 00:07:00.869 "default_time2retain": 20, 00:07:00.869 "first_burst_length": 8192, 00:07:00.869 "immediate_data": true, 00:07:00.869 "allow_duplicated_isid": 
false, 00:07:00.869 "error_recovery_level": 0, 00:07:00.869 "nop_timeout": 60, 00:07:00.869 "nop_in_interval": 30, 00:07:00.869 "disable_chap": false, 00:07:00.869 "require_chap": false, 00:07:00.869 "mutual_chap": false, 00:07:00.869 "chap_group": 0, 00:07:00.869 "max_large_datain_per_connection": 64, 00:07:00.869 "max_r2t_per_connection": 4, 00:07:00.869 "pdu_pool_size": 36864, 00:07:00.869 "immediate_data_pool_size": 16384, 00:07:00.869 "data_out_pool_size": 2048 00:07:00.869 } 00:07:00.869 } 00:07:00.869 ] 00:07:00.869 } 00:07:00.869 ] 00:07:00.869 } 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4078410 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 4078410 ']' 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 4078410 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4078410 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4078410' 00:07:00.869 killing process with pid 4078410 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 4078410 00:07:00.869 07:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 4078410 00:07:01.130 07:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4078582 00:07:01.130 07:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:01.130 07:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4078582 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 4078582 ']' 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 4078582 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4078582 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4078582' 00:07:06.423 killing process with pid 4078582 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 4078582 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
4078582 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:06.423 00:07:06.423 real 0m6.535s 00:07:06.423 user 0m6.402s 00:07:06.423 sys 0m0.525s 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:06.423 ************************************ 00:07:06.423 END TEST skip_rpc_with_json 00:07:06.423 ************************************ 00:07:06.423 07:13:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:06.423 07:13:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.423 07:13:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.423 07:13:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.423 ************************************ 00:07:06.423 START TEST skip_rpc_with_delay 00:07:06.423 ************************************ 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:06.423 [2024-07-25 07:13:13.730538] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
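skip_rpc_with_json, which wraps up just above, is a configuration round trip: with the first target running it creates the TCP nvmf transport, dumps the live state with save_config into test/rpc/config.json, restarts spdk_tgt with --no-rpc-server --json pointing at that file, and greps the captured log for the 'TCP Transport Init' notice to confirm the transport was rebuilt from the saved JSON. A condensed sketch of that round trip; the /tmp paths are illustrative rather than the test's own, and $first_pid is assumed to already hold the pid of the running first instance:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"
    # Against the running first instance: create the transport, then save the full config.
    "$rpc" nvmf_create_transport -t tcp
    "$rpc" save_config > /tmp/config.json
    kill "$first_pid"
    sleep 1                                            # give it a moment to release /var/tmp/spdk.sock
    # Replay the saved config in a fresh target and look for the transport-init notice.
    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json /tmp/config.json &> /tmp/log.txt &
    second_pid=$!
    sleep 5
    grep -q 'TCP Transport Init' /tmp/log.txt && echo "transport restored from JSON"
    kill -9 $second_pid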
00:07:06.423 [2024-07-25 07:13:13.730628] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.423 00:07:06.423 real 0m0.077s 00:07:06.423 user 0m0.040s 00:07:06.423 sys 0m0.037s 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.423 07:13:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:06.423 ************************************ 00:07:06.423 END TEST skip_rpc_with_delay 00:07:06.423 ************************************ 00:07:06.423 07:13:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:06.685 07:13:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:06.685 07:13:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:06.685 07:13:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.685 07:13:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.685 07:13:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.685 ************************************ 00:07:06.685 START TEST exit_on_failed_rpc_init 00:07:06.685 ************************************ 00:07:06.685 07:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:06.685 07:13:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4079822 00:07:06.685 07:13:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4079822 00:07:06.685 07:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 4079822 ']' 00:07:06.685 07:13:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.685 07:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.685 07:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.685 07:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.685 07:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.685 07:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:06.685 [2024-07-25 07:13:13.890831] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
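The waitforlisten call traced above blocks until pid 4079822 has brought up its RPC server on /var/tmp/spdk.sock. A simplified stand-in for that helper (the real one in autotest_common.sh is more involved; the function name below is illustrative):

# Poll until the target's RPC Unix socket appears, or give up after max_retries attempts.
waitforlisten_sketch() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100 i
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target exited before it started listening
    [[ -S $sock ]] && return 0               # socket exists, so the RPC server is reachable
    sleep 0.1
  done
  return 1
}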
00:07:06.685 [2024-07-25 07:13:13.890892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079822 ] 00:07:06.685 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.685 [2024-07-25 07:13:13.954133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.685 [2024-07-25 07:13:14.029134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:07.628 [2024-07-25 07:13:14.721145] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
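The second spdk_tgt launched just above (-m 0x2) is the failure case this test exists for: the first instance already owns the default RPC socket, so initialization is expected to abort with the "socket path in use" error that follows. A hedged illustration of the conflict; the alternate socket path is only an example:

# The default RPC endpoint is a fixed Unix socket, so only one target can own it at a time.
[[ -S /var/tmp/spdk.sock ]] && echo 'default RPC socket already claimed by the first target'
# A second target could coexist only by taking its own socket, e.g.:
#   spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock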
00:07:07.628 [2024-07-25 07:13:14.721219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079978 ] 00:07:07.628 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.628 [2024-07-25 07:13:14.797895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.628 [2024-07-25 07:13:14.861577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.628 [2024-07-25 07:13:14.861642] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:07.628 [2024-07-25 07:13:14.861651] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:07.628 [2024-07-25 07:13:14.861658] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4079822 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 4079822 ']' 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 4079822 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4079822 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4079822' 00:07:07.628 killing process with pid 4079822 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 4079822 00:07:07.628 07:13:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 4079822 00:07:07.889 00:07:07.889 real 0m1.349s 00:07:07.889 user 0m1.592s 00:07:07.889 sys 0m0.366s 00:07:07.889 07:13:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.889 07:13:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:07.889 ************************************ 00:07:07.889 END TEST exit_on_failed_rpc_init 00:07:07.889 ************************************ 00:07:07.889 07:13:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:07.889 00:07:07.889 real 0m13.657s 00:07:07.889 user 0m13.263s 00:07:07.889 sys 0m1.461s 00:07:07.889 07:13:15 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.889 07:13:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.889 ************************************ 00:07:07.890 END TEST skip_rpc 00:07:07.890 ************************************ 00:07:07.890 07:13:15 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:07.890 07:13:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.890 07:13:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.890 07:13:15 -- common/autotest_common.sh@10 -- # set +x 00:07:08.160 ************************************ 00:07:08.160 START TEST rpc_client 00:07:08.160 ************************************ 00:07:08.160 07:13:15 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:08.160 * Looking for test storage... 00:07:08.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:08.160 07:13:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:08.160 OK 00:07:08.160 07:13:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:08.160 00:07:08.160 real 0m0.126s 00:07:08.160 user 0m0.055s 00:07:08.160 sys 0m0.080s 00:07:08.160 07:13:15 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.160 07:13:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:08.160 ************************************ 00:07:08.160 END TEST rpc_client 00:07:08.160 ************************************ 00:07:08.160 07:13:15 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:08.160 07:13:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.160 07:13:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.160 07:13:15 -- common/autotest_common.sh@10 -- # set +x 00:07:08.160 ************************************ 00:07:08.160 START TEST json_config 00:07:08.160 ************************************ 00:07:08.160 07:13:15 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
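In the nvmf/common.sh lines that follow, the host NQN and host ID are derived once via nvme gen-hostnqn for use by later nvme connect calls. A condensed sketch of that derivation; the UUID-stripping expansion is an assumption that merely matches the values shown in the trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # keep only the UUID portion for --hostid (assumed derivation)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")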
00:07:08.427 07:13:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.427 07:13:15 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.427 07:13:15 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.427 07:13:15 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.427 07:13:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.427 07:13:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.427 07:13:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.427 07:13:15 json_config -- paths/export.sh@5 -- # export PATH 00:07:08.427 07:13:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@47 -- # : 0 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.427 07:13:15 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:08.427 07:13:15 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:08.428 07:13:15 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:08.428 07:13:15 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:08.428 07:13:15 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:08.428 07:13:15 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:08.428 07:13:15 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:08.428 INFO: JSON configuration test init 00:07:08.428 07:13:15 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:08.428 07:13:15 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:08.428 07:13:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.428 07:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.428 07:13:15 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:08.428 07:13:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.428 07:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.428 07:13:15 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:08.428 07:13:15 json_config -- json_config/common.sh@9 -- # local app=target 00:07:08.428 07:13:15 json_config -- json_config/common.sh@10 -- # shift 00:07:08.428 07:13:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:08.428 07:13:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:08.428 07:13:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:08.428 07:13:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:07:08.428 07:13:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:08.428 07:13:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4080302 00:07:08.428 07:13:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:08.428 Waiting for target to run... 00:07:08.428 07:13:15 json_config -- json_config/common.sh@25 -- # waitforlisten 4080302 /var/tmp/spdk_tgt.sock 00:07:08.428 07:13:15 json_config -- common/autotest_common.sh@831 -- # '[' -z 4080302 ']' 00:07:08.428 07:13:15 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:08.428 07:13:15 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.428 07:13:15 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:08.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:08.428 07:13:15 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.428 07:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.428 07:13:15 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:08.428 [2024-07-25 07:13:15.654421] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:08.428 [2024-07-25 07:13:15.654498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080302 ] 00:07:08.428 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.689 [2024-07-25 07:13:15.905154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.689 [2024-07-25 07:13:15.954951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.261 07:13:16 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.261 07:13:16 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:09.261 07:13:16 json_config -- json_config/common.sh@26 -- # echo '' 00:07:09.261 00:07:09.261 07:13:16 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:09.261 07:13:16 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:09.261 07:13:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.261 07:13:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:09.261 07:13:16 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:07:09.261 07:13:16 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:09.261 07:13:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.262 07:13:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:09.262 07:13:16 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:09.262 07:13:16 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:09.262 07:13:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:09.833 07:13:16 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:07:09.833 07:13:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:09.833 07:13:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.833 07:13:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:09.833 07:13:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:09.833 07:13:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:09.833 07:13:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:09.833 07:13:16 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:09.833 07:13:16 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:09.833 07:13:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@51 -- # sort 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:09.833 07:13:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.833 07:13:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:07:09.833 07:13:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.833 07:13:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:07:09.833 07:13:17 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:09.833 07:13:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:10.094 MallocForNvmf0 00:07:10.094 
07:13:17 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:10.094 07:13:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:10.355 MallocForNvmf1 00:07:10.355 07:13:17 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:10.355 07:13:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:10.355 [2024-07-25 07:13:17.642902] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.355 07:13:17 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:10.355 07:13:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:10.615 07:13:17 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:10.616 07:13:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:10.616 07:13:17 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:10.616 07:13:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:10.876 07:13:18 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:10.876 07:13:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:10.876 [2024-07-25 07:13:18.228875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:10.876 07:13:18 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:07:10.876 07:13:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.876 07:13:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.137 07:13:18 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:11.137 07:13:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:11.137 07:13:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.137 07:13:18 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:11.137 07:13:18 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:11.137 07:13:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:11.138 MallocBdevForConfigChangeCheck 00:07:11.138 07:13:18 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:11.138 07:13:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:11.138 07:13:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.399 07:13:18 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:11.399 07:13:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:11.659 07:13:18 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:07:11.659 INFO: shutting down applications... 00:07:11.659 07:13:18 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:11.659 07:13:18 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:11.659 07:13:18 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:11.659 07:13:18 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:11.920 Calling clear_iscsi_subsystem 00:07:11.920 Calling clear_nvmf_subsystem 00:07:11.920 Calling clear_nbd_subsystem 00:07:11.920 Calling clear_ublk_subsystem 00:07:11.920 Calling clear_vhost_blk_subsystem 00:07:11.920 Calling clear_vhost_scsi_subsystem 00:07:11.920 Calling clear_bdev_subsystem 00:07:11.920 07:13:19 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:11.921 07:13:19 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:11.921 07:13:19 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:11.921 07:13:19 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:11.921 07:13:19 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:11.921 07:13:19 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:12.183 07:13:19 json_config -- json_config/json_config.sh@349 -- # break 00:07:12.183 07:13:19 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:12.183 07:13:19 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:12.183 07:13:19 json_config -- json_config/common.sh@31 -- # local app=target 00:07:12.183 07:13:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:12.183 07:13:19 json_config -- json_config/common.sh@35 -- # [[ -n 4080302 ]] 00:07:12.183 07:13:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4080302 00:07:12.183 07:13:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:12.183 07:13:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:12.183 07:13:19 json_config -- json_config/common.sh@41 -- # kill -0 4080302 00:07:12.183 07:13:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:12.755 07:13:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:12.755 07:13:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:12.755 07:13:20 json_config -- json_config/common.sh@41 -- # kill -0 4080302 00:07:12.755 07:13:20 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:07:12.755 07:13:20 json_config -- json_config/common.sh@43 -- # break 00:07:12.755 07:13:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:12.755 07:13:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:12.755 SPDK target shutdown done 00:07:12.755 07:13:20 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:07:12.755 INFO: relaunching applications... 00:07:12.755 07:13:20 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:12.755 07:13:20 json_config -- json_config/common.sh@9 -- # local app=target 00:07:12.755 07:13:20 json_config -- json_config/common.sh@10 -- # shift 00:07:12.755 07:13:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:12.755 07:13:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:12.755 07:13:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:12.755 07:13:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:12.755 07:13:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:12.755 07:13:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4081231 00:07:12.755 07:13:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:12.755 Waiting for target to run... 00:07:12.755 07:13:20 json_config -- json_config/common.sh@25 -- # waitforlisten 4081231 /var/tmp/spdk_tgt.sock 00:07:12.755 07:13:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:12.755 07:13:20 json_config -- common/autotest_common.sh@831 -- # '[' -z 4081231 ']' 00:07:12.755 07:13:20 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:12.755 07:13:20 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.755 07:13:20 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:12.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:12.755 07:13:20 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.755 07:13:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.755 [2024-07-25 07:13:20.112330] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
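The relaunch above replays the configuration captured from the first target: save_config was written to spdk_tgt_config.json before shutdown and is now fed back through --json. A condensed sketch of that round trip using the paths from this run (the rpc shell wrapper is illustrative):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }
CFG=$SPDK/spdk_tgt_config.json

rpc save_config > "$CFG"    # snapshot the running target's configuration as JSON
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CFG" &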
00:07:12.755 [2024-07-25 07:13:20.112399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4081231 ] 00:07:13.016 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.277 [2024-07-25 07:13:20.474013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.277 [2024-07-25 07:13:20.537153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.849 [2024-07-25 07:13:21.033715] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.849 [2024-07-25 07:13:21.066089] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:13.849 07:13:21 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.849 07:13:21 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:13.849 07:13:21 json_config -- json_config/common.sh@26 -- # echo '' 00:07:13.849 00:07:13.849 07:13:21 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:13.849 07:13:21 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:13.849 INFO: Checking if target configuration is the same... 00:07:13.849 07:13:21 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:13.849 07:13:21 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:13.849 07:13:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:13.849 + '[' 2 -ne 2 ']' 00:07:13.849 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:13.849 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:13.849 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:13.849 +++ basename /dev/fd/62 00:07:13.849 ++ mktemp /tmp/62.XXX 00:07:13.849 + tmp_file_1=/tmp/62.Yot 00:07:13.849 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:13.849 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:13.849 + tmp_file_2=/tmp/spdk_tgt_config.json.Yqj 00:07:13.849 + ret=0 00:07:13.849 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:14.110 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:14.110 + diff -u /tmp/62.Yot /tmp/spdk_tgt_config.json.Yqj 00:07:14.110 + echo 'INFO: JSON config files are the same' 00:07:14.110 INFO: JSON config files are the same 00:07:14.110 + rm /tmp/62.Yot /tmp/spdk_tgt_config.json.Yqj 00:07:14.110 + exit 0 00:07:14.110 07:13:21 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:14.110 07:13:21 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:14.110 INFO: changing configuration and checking if this can be detected... 
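The "JSON config files are the same" verdict above comes from normalizing both documents with the repo's config_filter.py sorter before letting diff -u decide. A sketch of that comparison; temporary-file handling is simplified relative to json_diff.sh:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
FILTER=$SPDK/test/json_config/config_filter.py
a=$(mktemp) b=$(mktemp)

"$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > "$a"
"$FILTER" -method sort < "$SPDK/spdk_tgt_config.json" > "$b"
diff -u "$a" "$b" && echo 'INFO: JSON config files are the same'
rm -f "$a" "$b"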
00:07:14.110 07:13:21 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:14.110 07:13:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:14.371 07:13:21 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:14.371 07:13:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:14.371 07:13:21 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:14.371 + '[' 2 -ne 2 ']' 00:07:14.371 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:14.371 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:14.371 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:14.371 +++ basename /dev/fd/62 00:07:14.371 ++ mktemp /tmp/62.XXX 00:07:14.371 + tmp_file_1=/tmp/62.t6l 00:07:14.371 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:14.371 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:14.371 + tmp_file_2=/tmp/spdk_tgt_config.json.H2A 00:07:14.371 + ret=0 00:07:14.371 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:14.633 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:14.633 + diff -u /tmp/62.t6l /tmp/spdk_tgt_config.json.H2A 00:07:14.633 + ret=1 00:07:14.633 + echo '=== Start of file: /tmp/62.t6l ===' 00:07:14.633 + cat /tmp/62.t6l 00:07:14.633 + echo '=== End of file: /tmp/62.t6l ===' 00:07:14.633 + echo '' 00:07:14.633 + echo '=== Start of file: /tmp/spdk_tgt_config.json.H2A ===' 00:07:14.633 + cat /tmp/spdk_tgt_config.json.H2A 00:07:14.633 + echo '=== End of file: /tmp/spdk_tgt_config.json.H2A ===' 00:07:14.633 + echo '' 00:07:14.633 + rm /tmp/62.t6l /tmp/spdk_tgt_config.json.H2A 00:07:14.633 + exit 1 00:07:14.633 07:13:21 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:07:14.633 INFO: configuration change detected. 
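The "configuration change detected" result above is produced by deleting the sentinel MallocBdevForConfigChangeCheck bdev over RPC and re-running the same normalized diff, which must now report a difference. A condensed sketch; the rpc wrapper and in-line process substitutions are illustrative:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }
FILTER=$SPDK/test/json_config/config_filter.py

rpc bdev_malloc_delete MallocBdevForConfigChangeCheck    # mutate the live configuration
if ! diff -u <(rpc save_config | "$FILTER" -method sort) \
             <("$FILTER" -method sort < "$SPDK/spdk_tgt_config.json") > /dev/null; then
  echo 'INFO: configuration change detected.'
fi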
00:07:14.633 07:13:21 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:14.633 07:13:21 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:14.633 07:13:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:14.633 07:13:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.634 07:13:21 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:14.634 07:13:21 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:14.634 07:13:21 json_config -- json_config/json_config.sh@321 -- # [[ -n 4081231 ]] 00:07:14.634 07:13:21 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:14.634 07:13:21 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:14.634 07:13:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:14.634 07:13:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.634 07:13:21 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:07:14.634 07:13:21 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:14.634 07:13:21 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:14.634 07:13:21 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:14.634 07:13:21 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:07:14.634 07:13:21 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:14.634 07:13:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:14.634 07:13:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.894 07:13:22 json_config -- json_config/json_config.sh@327 -- # killprocess 4081231 00:07:14.894 07:13:22 json_config -- common/autotest_common.sh@950 -- # '[' -z 4081231 ']' 00:07:14.894 07:13:22 json_config -- common/autotest_common.sh@954 -- # kill -0 4081231 00:07:14.894 07:13:22 json_config -- common/autotest_common.sh@955 -- # uname 00:07:14.894 07:13:22 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.894 07:13:22 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4081231 00:07:14.895 07:13:22 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.895 07:13:22 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.895 07:13:22 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4081231' 00:07:14.895 killing process with pid 4081231 00:07:14.895 07:13:22 json_config -- common/autotest_common.sh@969 -- # kill 4081231 00:07:14.895 07:13:22 json_config -- common/autotest_common.sh@974 -- # wait 4081231 00:07:15.156 07:13:22 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:15.156 07:13:22 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:15.156 07:13:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.156 07:13:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.156 07:13:22 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:15.156 07:13:22 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:15.156 INFO: Success 00:07:15.156 00:07:15.156 real 0m6.928s 
00:07:15.156 user 0m8.325s 00:07:15.156 sys 0m1.699s 00:07:15.156 07:13:22 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.156 07:13:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.156 ************************************ 00:07:15.156 END TEST json_config 00:07:15.156 ************************************ 00:07:15.156 07:13:22 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:15.156 07:13:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.156 07:13:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.156 07:13:22 -- common/autotest_common.sh@10 -- # set +x 00:07:15.156 ************************************ 00:07:15.156 START TEST json_config_extra_key 00:07:15.156 ************************************ 00:07:15.156 07:13:22 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:15.418 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.418 07:13:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.419 07:13:22 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.419 07:13:22 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.419 07:13:22 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.419 07:13:22 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.419 07:13:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.419 07:13:22 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.419 07:13:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:15.419 07:13:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.419 07:13:22 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.419 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:15.419 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:15.419 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:15.419 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:15.419 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:15.419 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:15.419 07:13:22 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:15.419 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:15.419 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:15.419 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:15.419 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:15.419 INFO: launching applications... 00:07:15.419 07:13:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:15.419 07:13:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:15.419 07:13:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:15.419 07:13:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:15.419 07:13:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:15.419 07:13:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:15.419 07:13:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:15.419 07:13:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:15.419 07:13:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4082008 00:07:15.419 07:13:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:15.419 Waiting for target to run... 00:07:15.419 07:13:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4082008 /var/tmp/spdk_tgt.sock 00:07:15.419 07:13:22 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:15.419 07:13:22 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 4082008 ']' 00:07:15.419 07:13:22 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:15.419 07:13:22 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.419 07:13:22 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:15.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:15.419 07:13:22 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.419 07:13:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:15.419 [2024-07-25 07:13:22.664773] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
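json_config_extra_key, whose setup continues below, boots the target straight from test/json_config/extra_key.json via --json rather than building configuration over RPC. For orientation, a minimal file of the general shape such a target consumes could look like the heredoc below; the malloc bdev and its parameters are illustrative, not the contents of the real extra_key.json:

cat > /tmp/minimal_extra_key.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 2048, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
# Then: spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/minimal_extra_key.json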
00:07:15.419 [2024-07-25 07:13:22.664843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082008 ] 00:07:15.419 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.680 [2024-07-25 07:13:22.934190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.680 [2024-07-25 07:13:22.987555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.251 07:13:23 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.251 07:13:23 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:16.251 07:13:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:16.251 00:07:16.251 07:13:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:16.251 INFO: shutting down applications... 00:07:16.251 07:13:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:16.251 07:13:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:16.251 07:13:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:16.251 07:13:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4082008 ]] 00:07:16.251 07:13:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4082008 00:07:16.251 07:13:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:16.251 07:13:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:16.251 07:13:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4082008 00:07:16.251 07:13:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:16.551 07:13:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:16.551 07:13:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:16.831 07:13:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4082008 00:07:16.831 07:13:23 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:16.831 07:13:23 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:16.831 07:13:23 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:16.831 07:13:23 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:16.831 SPDK target shutdown done 00:07:16.831 07:13:23 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:16.831 Success 00:07:16.831 00:07:16.832 real 0m1.418s 00:07:16.832 user 0m1.047s 00:07:16.832 sys 0m0.373s 00:07:16.832 07:13:23 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.832 07:13:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:16.832 ************************************ 00:07:16.832 END TEST json_config_extra_key 00:07:16.832 ************************************ 00:07:16.832 07:13:23 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:16.832 07:13:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.832 07:13:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.832 07:13:23 -- common/autotest_common.sh@10 -- # set +x 00:07:16.832 
************************************ 00:07:16.832 START TEST alias_rpc 00:07:16.832 ************************************ 00:07:16.832 07:13:23 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:16.832 * Looking for test storage... 00:07:16.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:16.832 07:13:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:16.832 07:13:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4082321 00:07:16.832 07:13:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4082321 00:07:16.832 07:13:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:16.832 07:13:24 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 4082321 ']' 00:07:16.832 07:13:24 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.832 07:13:24 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.832 07:13:24 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.832 07:13:24 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.832 07:13:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.832 [2024-07-25 07:13:24.159746] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:16.832 [2024-07-25 07:13:24.159822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082321 ] 00:07:16.832 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.092 [2024-07-25 07:13:24.226635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.092 [2024-07-25 07:13:24.303323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.662 07:13:24 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.662 07:13:24 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:17.662 07:13:24 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:17.923 07:13:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4082321 00:07:17.923 07:13:25 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 4082321 ']' 00:07:17.923 07:13:25 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 4082321 00:07:17.923 07:13:25 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:17.923 07:13:25 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.923 07:13:25 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4082321 00:07:17.923 07:13:25 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.923 07:13:25 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.923 07:13:25 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4082321' 00:07:17.923 killing process with pid 4082321 00:07:17.923 07:13:25 alias_rpc -- common/autotest_common.sh@969 -- # kill 4082321 00:07:17.923 07:13:25 
alias_rpc -- common/autotest_common.sh@974 -- # wait 4082321 00:07:18.184 00:07:18.184 real 0m1.400s 00:07:18.184 user 0m1.533s 00:07:18.184 sys 0m0.399s 00:07:18.184 07:13:25 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.184 07:13:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.184 ************************************ 00:07:18.184 END TEST alias_rpc 00:07:18.184 ************************************ 00:07:18.184 07:13:25 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:18.184 07:13:25 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:18.184 07:13:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.184 07:13:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.184 07:13:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.184 ************************************ 00:07:18.184 START TEST spdkcli_tcp 00:07:18.184 ************************************ 00:07:18.184 07:13:25 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:18.446 * Looking for test storage... 00:07:18.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:18.446 07:13:25 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:18.446 07:13:25 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:18.446 07:13:25 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:18.446 07:13:25 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:18.446 07:13:25 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:18.446 07:13:25 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:18.446 07:13:25 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:18.446 07:13:25 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:18.446 07:13:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.446 07:13:25 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4082601 00:07:18.446 07:13:25 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4082601 00:07:18.446 07:13:25 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:18.446 07:13:25 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 4082601 ']' 00:07:18.446 07:13:25 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.446 07:13:25 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.446 07:13:25 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.446 07:13:25 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.446 07:13:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.446 [2024-07-25 07:13:25.638230] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:07:18.446 [2024-07-25 07:13:25.638303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082601 ] 00:07:18.446 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.446 [2024-07-25 07:13:25.706278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.446 [2024-07-25 07:13:25.784668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.446 [2024-07-25 07:13:25.784670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.389 07:13:26 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.389 07:13:26 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:19.389 07:13:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4082792 00:07:19.389 07:13:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:19.389 07:13:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:19.389 [ 00:07:19.389 "bdev_malloc_delete", 00:07:19.389 "bdev_malloc_create", 00:07:19.389 "bdev_null_resize", 00:07:19.389 "bdev_null_delete", 00:07:19.389 "bdev_null_create", 00:07:19.389 "bdev_nvme_cuse_unregister", 00:07:19.389 "bdev_nvme_cuse_register", 00:07:19.389 "bdev_opal_new_user", 00:07:19.389 "bdev_opal_set_lock_state", 00:07:19.389 "bdev_opal_delete", 00:07:19.389 "bdev_opal_get_info", 00:07:19.389 "bdev_opal_create", 00:07:19.389 "bdev_nvme_opal_revert", 00:07:19.389 "bdev_nvme_opal_init", 00:07:19.389 "bdev_nvme_send_cmd", 00:07:19.389 "bdev_nvme_get_path_iostat", 00:07:19.389 "bdev_nvme_get_mdns_discovery_info", 00:07:19.389 "bdev_nvme_stop_mdns_discovery", 00:07:19.389 "bdev_nvme_start_mdns_discovery", 00:07:19.389 "bdev_nvme_set_multipath_policy", 00:07:19.389 "bdev_nvme_set_preferred_path", 00:07:19.389 "bdev_nvme_get_io_paths", 00:07:19.389 "bdev_nvme_remove_error_injection", 00:07:19.389 "bdev_nvme_add_error_injection", 00:07:19.389 "bdev_nvme_get_discovery_info", 00:07:19.389 "bdev_nvme_stop_discovery", 00:07:19.389 "bdev_nvme_start_discovery", 00:07:19.389 "bdev_nvme_get_controller_health_info", 00:07:19.389 "bdev_nvme_disable_controller", 00:07:19.389 "bdev_nvme_enable_controller", 00:07:19.389 "bdev_nvme_reset_controller", 00:07:19.389 "bdev_nvme_get_transport_statistics", 00:07:19.389 "bdev_nvme_apply_firmware", 00:07:19.389 "bdev_nvme_detach_controller", 00:07:19.389 "bdev_nvme_get_controllers", 00:07:19.389 "bdev_nvme_attach_controller", 00:07:19.389 "bdev_nvme_set_hotplug", 00:07:19.389 "bdev_nvme_set_options", 00:07:19.389 "bdev_passthru_delete", 00:07:19.389 "bdev_passthru_create", 00:07:19.389 "bdev_lvol_set_parent_bdev", 00:07:19.389 "bdev_lvol_set_parent", 00:07:19.389 "bdev_lvol_check_shallow_copy", 00:07:19.389 "bdev_lvol_start_shallow_copy", 00:07:19.389 "bdev_lvol_grow_lvstore", 00:07:19.389 "bdev_lvol_get_lvols", 00:07:19.389 "bdev_lvol_get_lvstores", 00:07:19.389 "bdev_lvol_delete", 00:07:19.389 "bdev_lvol_set_read_only", 00:07:19.389 "bdev_lvol_resize", 00:07:19.389 "bdev_lvol_decouple_parent", 00:07:19.390 "bdev_lvol_inflate", 00:07:19.390 "bdev_lvol_rename", 00:07:19.390 "bdev_lvol_clone_bdev", 00:07:19.390 "bdev_lvol_clone", 00:07:19.390 "bdev_lvol_snapshot", 00:07:19.390 "bdev_lvol_create", 00:07:19.390 "bdev_lvol_delete_lvstore", 00:07:19.390 
"bdev_lvol_rename_lvstore", 00:07:19.390 "bdev_lvol_create_lvstore", 00:07:19.390 "bdev_raid_set_options", 00:07:19.390 "bdev_raid_remove_base_bdev", 00:07:19.390 "bdev_raid_add_base_bdev", 00:07:19.390 "bdev_raid_delete", 00:07:19.390 "bdev_raid_create", 00:07:19.390 "bdev_raid_get_bdevs", 00:07:19.390 "bdev_error_inject_error", 00:07:19.390 "bdev_error_delete", 00:07:19.390 "bdev_error_create", 00:07:19.390 "bdev_split_delete", 00:07:19.390 "bdev_split_create", 00:07:19.390 "bdev_delay_delete", 00:07:19.390 "bdev_delay_create", 00:07:19.390 "bdev_delay_update_latency", 00:07:19.390 "bdev_zone_block_delete", 00:07:19.390 "bdev_zone_block_create", 00:07:19.390 "blobfs_create", 00:07:19.390 "blobfs_detect", 00:07:19.390 "blobfs_set_cache_size", 00:07:19.390 "bdev_aio_delete", 00:07:19.390 "bdev_aio_rescan", 00:07:19.390 "bdev_aio_create", 00:07:19.390 "bdev_ftl_set_property", 00:07:19.390 "bdev_ftl_get_properties", 00:07:19.390 "bdev_ftl_get_stats", 00:07:19.390 "bdev_ftl_unmap", 00:07:19.390 "bdev_ftl_unload", 00:07:19.390 "bdev_ftl_delete", 00:07:19.390 "bdev_ftl_load", 00:07:19.390 "bdev_ftl_create", 00:07:19.390 "bdev_virtio_attach_controller", 00:07:19.390 "bdev_virtio_scsi_get_devices", 00:07:19.390 "bdev_virtio_detach_controller", 00:07:19.390 "bdev_virtio_blk_set_hotplug", 00:07:19.390 "bdev_iscsi_delete", 00:07:19.390 "bdev_iscsi_create", 00:07:19.390 "bdev_iscsi_set_options", 00:07:19.390 "accel_error_inject_error", 00:07:19.390 "ioat_scan_accel_module", 00:07:19.390 "dsa_scan_accel_module", 00:07:19.390 "iaa_scan_accel_module", 00:07:19.390 "vfu_virtio_create_scsi_endpoint", 00:07:19.390 "vfu_virtio_scsi_remove_target", 00:07:19.390 "vfu_virtio_scsi_add_target", 00:07:19.390 "vfu_virtio_create_blk_endpoint", 00:07:19.390 "vfu_virtio_delete_endpoint", 00:07:19.390 "keyring_file_remove_key", 00:07:19.390 "keyring_file_add_key", 00:07:19.390 "keyring_linux_set_options", 00:07:19.390 "iscsi_get_histogram", 00:07:19.390 "iscsi_enable_histogram", 00:07:19.390 "iscsi_set_options", 00:07:19.390 "iscsi_get_auth_groups", 00:07:19.390 "iscsi_auth_group_remove_secret", 00:07:19.390 "iscsi_auth_group_add_secret", 00:07:19.390 "iscsi_delete_auth_group", 00:07:19.390 "iscsi_create_auth_group", 00:07:19.390 "iscsi_set_discovery_auth", 00:07:19.390 "iscsi_get_options", 00:07:19.390 "iscsi_target_node_request_logout", 00:07:19.390 "iscsi_target_node_set_redirect", 00:07:19.390 "iscsi_target_node_set_auth", 00:07:19.390 "iscsi_target_node_add_lun", 00:07:19.390 "iscsi_get_stats", 00:07:19.390 "iscsi_get_connections", 00:07:19.390 "iscsi_portal_group_set_auth", 00:07:19.390 "iscsi_start_portal_group", 00:07:19.390 "iscsi_delete_portal_group", 00:07:19.390 "iscsi_create_portal_group", 00:07:19.390 "iscsi_get_portal_groups", 00:07:19.390 "iscsi_delete_target_node", 00:07:19.390 "iscsi_target_node_remove_pg_ig_maps", 00:07:19.390 "iscsi_target_node_add_pg_ig_maps", 00:07:19.390 "iscsi_create_target_node", 00:07:19.390 "iscsi_get_target_nodes", 00:07:19.390 "iscsi_delete_initiator_group", 00:07:19.390 "iscsi_initiator_group_remove_initiators", 00:07:19.390 "iscsi_initiator_group_add_initiators", 00:07:19.390 "iscsi_create_initiator_group", 00:07:19.390 "iscsi_get_initiator_groups", 00:07:19.390 "nvmf_set_crdt", 00:07:19.390 "nvmf_set_config", 00:07:19.390 "nvmf_set_max_subsystems", 00:07:19.390 "nvmf_stop_mdns_prr", 00:07:19.390 "nvmf_publish_mdns_prr", 00:07:19.390 "nvmf_subsystem_get_listeners", 00:07:19.390 "nvmf_subsystem_get_qpairs", 00:07:19.390 "nvmf_subsystem_get_controllers", 00:07:19.390 
"nvmf_get_stats", 00:07:19.390 "nvmf_get_transports", 00:07:19.390 "nvmf_create_transport", 00:07:19.390 "nvmf_get_targets", 00:07:19.390 "nvmf_delete_target", 00:07:19.390 "nvmf_create_target", 00:07:19.390 "nvmf_subsystem_allow_any_host", 00:07:19.390 "nvmf_subsystem_remove_host", 00:07:19.390 "nvmf_subsystem_add_host", 00:07:19.390 "nvmf_ns_remove_host", 00:07:19.390 "nvmf_ns_add_host", 00:07:19.390 "nvmf_subsystem_remove_ns", 00:07:19.390 "nvmf_subsystem_add_ns", 00:07:19.390 "nvmf_subsystem_listener_set_ana_state", 00:07:19.390 "nvmf_discovery_get_referrals", 00:07:19.390 "nvmf_discovery_remove_referral", 00:07:19.390 "nvmf_discovery_add_referral", 00:07:19.390 "nvmf_subsystem_remove_listener", 00:07:19.390 "nvmf_subsystem_add_listener", 00:07:19.390 "nvmf_delete_subsystem", 00:07:19.390 "nvmf_create_subsystem", 00:07:19.390 "nvmf_get_subsystems", 00:07:19.390 "env_dpdk_get_mem_stats", 00:07:19.390 "nbd_get_disks", 00:07:19.390 "nbd_stop_disk", 00:07:19.390 "nbd_start_disk", 00:07:19.390 "ublk_recover_disk", 00:07:19.390 "ublk_get_disks", 00:07:19.390 "ublk_stop_disk", 00:07:19.390 "ublk_start_disk", 00:07:19.390 "ublk_destroy_target", 00:07:19.390 "ublk_create_target", 00:07:19.390 "virtio_blk_create_transport", 00:07:19.390 "virtio_blk_get_transports", 00:07:19.390 "vhost_controller_set_coalescing", 00:07:19.390 "vhost_get_controllers", 00:07:19.390 "vhost_delete_controller", 00:07:19.390 "vhost_create_blk_controller", 00:07:19.390 "vhost_scsi_controller_remove_target", 00:07:19.390 "vhost_scsi_controller_add_target", 00:07:19.390 "vhost_start_scsi_controller", 00:07:19.390 "vhost_create_scsi_controller", 00:07:19.390 "thread_set_cpumask", 00:07:19.390 "scheduler_set_options", 00:07:19.390 "framework_get_governor", 00:07:19.390 "framework_get_scheduler", 00:07:19.390 "framework_set_scheduler", 00:07:19.390 "framework_get_reactors", 00:07:19.390 "thread_get_io_channels", 00:07:19.390 "thread_get_pollers", 00:07:19.390 "thread_get_stats", 00:07:19.390 "framework_monitor_context_switch", 00:07:19.390 "spdk_kill_instance", 00:07:19.390 "log_enable_timestamps", 00:07:19.390 "log_get_flags", 00:07:19.390 "log_clear_flag", 00:07:19.390 "log_set_flag", 00:07:19.390 "log_get_level", 00:07:19.390 "log_set_level", 00:07:19.390 "log_get_print_level", 00:07:19.390 "log_set_print_level", 00:07:19.390 "framework_enable_cpumask_locks", 00:07:19.390 "framework_disable_cpumask_locks", 00:07:19.390 "framework_wait_init", 00:07:19.390 "framework_start_init", 00:07:19.390 "scsi_get_devices", 00:07:19.390 "bdev_get_histogram", 00:07:19.390 "bdev_enable_histogram", 00:07:19.390 "bdev_set_qos_limit", 00:07:19.390 "bdev_set_qd_sampling_period", 00:07:19.390 "bdev_get_bdevs", 00:07:19.390 "bdev_reset_iostat", 00:07:19.390 "bdev_get_iostat", 00:07:19.390 "bdev_examine", 00:07:19.390 "bdev_wait_for_examine", 00:07:19.390 "bdev_set_options", 00:07:19.390 "notify_get_notifications", 00:07:19.390 "notify_get_types", 00:07:19.390 "accel_get_stats", 00:07:19.390 "accel_set_options", 00:07:19.390 "accel_set_driver", 00:07:19.390 "accel_crypto_key_destroy", 00:07:19.390 "accel_crypto_keys_get", 00:07:19.390 "accel_crypto_key_create", 00:07:19.390 "accel_assign_opc", 00:07:19.390 "accel_get_module_info", 00:07:19.390 "accel_get_opc_assignments", 00:07:19.390 "vmd_rescan", 00:07:19.390 "vmd_remove_device", 00:07:19.390 "vmd_enable", 00:07:19.390 "sock_get_default_impl", 00:07:19.390 "sock_set_default_impl", 00:07:19.390 "sock_impl_set_options", 00:07:19.390 "sock_impl_get_options", 00:07:19.390 "iobuf_get_stats", 
00:07:19.390 "iobuf_set_options", 00:07:19.390 "keyring_get_keys", 00:07:19.390 "framework_get_pci_devices", 00:07:19.390 "framework_get_config", 00:07:19.390 "framework_get_subsystems", 00:07:19.390 "vfu_tgt_set_base_path", 00:07:19.390 "trace_get_info", 00:07:19.390 "trace_get_tpoint_group_mask", 00:07:19.390 "trace_disable_tpoint_group", 00:07:19.390 "trace_enable_tpoint_group", 00:07:19.390 "trace_clear_tpoint_mask", 00:07:19.390 "trace_set_tpoint_mask", 00:07:19.390 "spdk_get_version", 00:07:19.390 "rpc_get_methods" 00:07:19.390 ] 00:07:19.390 07:13:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:19.390 07:13:26 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:19.390 07:13:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.390 07:13:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:19.390 07:13:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4082601 00:07:19.390 07:13:26 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 4082601 ']' 00:07:19.390 07:13:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 4082601 00:07:19.390 07:13:26 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:19.390 07:13:26 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.390 07:13:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4082601 00:07:19.391 07:13:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.391 07:13:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.391 07:13:26 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4082601' 00:07:19.391 killing process with pid 4082601 00:07:19.391 07:13:26 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 4082601 00:07:19.391 07:13:26 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 4082601 00:07:19.652 00:07:19.652 real 0m1.381s 00:07:19.652 user 0m2.517s 00:07:19.652 sys 0m0.405s 00:07:19.652 07:13:26 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.652 07:13:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.652 ************************************ 00:07:19.652 END TEST spdkcli_tcp 00:07:19.652 ************************************ 00:07:19.652 07:13:26 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:19.652 07:13:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.652 07:13:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.652 07:13:26 -- common/autotest_common.sh@10 -- # set +x 00:07:19.652 ************************************ 00:07:19.652 START TEST dpdk_mem_utility 00:07:19.652 ************************************ 00:07:19.652 07:13:26 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:19.652 * Looking for test storage... 
00:07:19.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:19.913 07:13:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:19.913 07:13:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4082888 00:07:19.913 07:13:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4082888 00:07:19.913 07:13:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:19.913 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 4082888 ']' 00:07:19.913 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.914 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.914 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.914 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.914 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:19.914 [2024-07-25 07:13:27.085292] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:19.914 [2024-07-25 07:13:27.085357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082888 ] 00:07:19.914 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.914 [2024-07-25 07:13:27.149491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.914 [2024-07-25 07:13:27.225953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.485 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.485 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:20.485 07:13:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:20.485 07:13:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:20.485 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.485 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:20.747 { 00:07:20.747 "filename": "/tmp/spdk_mem_dump.txt" 00:07:20.747 } 00:07:20.747 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.747 07:13:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:20.747 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:20.747 1 heaps totaling size 814.000000 MiB 00:07:20.747 size: 814.000000 MiB heap id: 0 00:07:20.747 end heaps---------- 00:07:20.747 8 mempools totaling size 598.116089 MiB 00:07:20.747 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:20.747 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:20.747 size: 84.521057 MiB name: bdev_io_4082888 00:07:20.747 size: 51.011292 MiB name: evtpool_4082888 00:07:20.747 
size: 50.003479 MiB name: msgpool_4082888 00:07:20.747 size: 21.763794 MiB name: PDU_Pool 00:07:20.747 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:20.747 size: 0.026123 MiB name: Session_Pool 00:07:20.747 end mempools------- 00:07:20.747 6 memzones totaling size 4.142822 MiB 00:07:20.747 size: 1.000366 MiB name: RG_ring_0_4082888 00:07:20.747 size: 1.000366 MiB name: RG_ring_1_4082888 00:07:20.747 size: 1.000366 MiB name: RG_ring_4_4082888 00:07:20.747 size: 1.000366 MiB name: RG_ring_5_4082888 00:07:20.747 size: 0.125366 MiB name: RG_ring_2_4082888 00:07:20.747 size: 0.015991 MiB name: RG_ring_3_4082888 00:07:20.747 end memzones------- 00:07:20.747 07:13:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:20.747 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:20.747 list of free elements. size: 12.519348 MiB 00:07:20.747 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:20.747 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:20.747 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:20.747 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:20.747 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:20.747 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:20.747 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:20.747 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:20.747 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:20.747 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:20.747 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:20.747 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:20.747 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:20.747 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:20.747 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:20.747 list of standard malloc elements. 
size: 199.218079 MiB 00:07:20.747 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:20.747 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:20.747 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:20.747 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:20.747 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:20.747 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:20.747 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:20.747 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:20.747 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:20.747 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:20.747 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:20.747 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:20.747 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:20.747 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:20.747 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:20.747 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:20.747 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:20.747 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:20.747 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:20.747 list of memzone associated elements. 
size: 602.262573 MiB 00:07:20.747 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:20.747 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:20.747 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:20.747 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:20.747 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:20.747 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4082888_0 00:07:20.747 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:20.747 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4082888_0 00:07:20.747 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:20.747 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4082888_0 00:07:20.747 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:20.747 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:20.747 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:20.747 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:20.747 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:20.747 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4082888 00:07:20.747 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:20.747 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4082888 00:07:20.747 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:20.747 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4082888 00:07:20.747 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:20.747 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:20.747 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:20.747 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:20.747 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:20.747 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:20.747 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:20.747 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:20.747 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:20.747 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4082888 00:07:20.747 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:20.748 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4082888 00:07:20.748 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:20.748 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4082888 00:07:20.748 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:20.748 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4082888 00:07:20.748 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:20.748 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4082888 00:07:20.748 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:20.748 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:20.748 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:20.748 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:20.748 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:20.748 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:20.748 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:20.748 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_4082888 00:07:20.748 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:20.748 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:20.748 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:20.748 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:20.748 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:20.748 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4082888 00:07:20.748 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:20.748 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:20.748 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:20.748 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4082888 00:07:20.748 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:20.748 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4082888 00:07:20.748 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:20.748 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:20.748 07:13:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:20.748 07:13:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4082888 00:07:20.748 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 4082888 ']' 00:07:20.748 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 4082888 00:07:20.748 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:20.748 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.748 07:13:27 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4082888 00:07:20.748 07:13:28 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.748 07:13:28 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.748 07:13:28 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4082888' 00:07:20.748 killing process with pid 4082888 00:07:20.748 07:13:28 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 4082888 00:07:20.748 07:13:28 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 4082888 00:07:21.009 00:07:21.009 real 0m1.291s 00:07:21.009 user 0m1.347s 00:07:21.009 sys 0m0.388s 00:07:21.009 07:13:28 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.009 07:13:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:21.009 ************************************ 00:07:21.009 END TEST dpdk_mem_utility 00:07:21.009 ************************************ 00:07:21.009 07:13:28 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:21.009 07:13:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.009 07:13:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.009 07:13:28 -- common/autotest_common.sh@10 -- # set +x 00:07:21.009 ************************************ 00:07:21.009 START TEST event 00:07:21.009 ************************************ 00:07:21.009 07:13:28 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:21.270 * Looking for test storage... 
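The dpdk_mem_utility pass above is two steps: the env_dpdk_get_mem_stats RPC makes the running target dump its DPDK memory layout to a file (the trace shows it landing in /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then renders that dump, first as the heap/mempool/memzone totals and then, with -m 0 as in the trace, as the detailed element list for heap 0. Condensed from the trace, against a target on the default RPC socket:

  # ask the running target to write its DPDK memory dump
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # prints: { "filename": "/tmp/spdk_mem_dump.txt" }

  # render the dump: totals first, then heap 0 in detail (as run above)
  ./scripts/dpdk_mem_info.py
  ./scripts/dpdk_mem_info.py -m 0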
00:07:21.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:21.270 07:13:28 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:21.270 07:13:28 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:21.270 07:13:28 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:21.270 07:13:28 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:21.270 07:13:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.270 07:13:28 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.270 ************************************ 00:07:21.270 START TEST event_perf 00:07:21.270 ************************************ 00:07:21.270 07:13:28 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:21.270 Running I/O for 1 seconds...[2024-07-25 07:13:28.452824] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:21.270 [2024-07-25 07:13:28.452924] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4083258 ] 00:07:21.270 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.270 [2024-07-25 07:13:28.520673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.270 [2024-07-25 07:13:28.597399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.270 [2024-07-25 07:13:28.597518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.270 [2024-07-25 07:13:28.597681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.270 Running I/O for 1 seconds...[2024-07-25 07:13:28.597682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.654 00:07:22.654 lcore 0: 180747 00:07:22.654 lcore 1: 180748 00:07:22.654 lcore 2: 180745 00:07:22.654 lcore 3: 180749 00:07:22.654 done. 00:07:22.654 00:07:22.654 real 0m1.221s 00:07:22.654 user 0m4.138s 00:07:22.654 sys 0m0.078s 00:07:22.654 07:13:29 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.654 07:13:29 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.654 ************************************ 00:07:22.654 END TEST event_perf 00:07:22.654 ************************************ 00:07:22.654 07:13:29 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:22.654 07:13:29 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:22.654 07:13:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.654 07:13:29 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.654 ************************************ 00:07:22.654 START TEST event_reactor 00:07:22.654 ************************************ 00:07:22.654 07:13:29 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:22.654 [2024-07-25 07:13:29.742032] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:07:22.654 [2024-07-25 07:13:29.742132] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4083618 ] 00:07:22.654 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.654 [2024-07-25 07:13:29.804275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.654 [2024-07-25 07:13:29.869549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.596 test_start 00:07:23.596 oneshot 00:07:23.596 tick 100 00:07:23.596 tick 100 00:07:23.596 tick 250 00:07:23.596 tick 100 00:07:23.596 tick 100 00:07:23.596 tick 100 00:07:23.596 tick 250 00:07:23.596 tick 500 00:07:23.596 tick 100 00:07:23.596 tick 100 00:07:23.596 tick 250 00:07:23.596 tick 100 00:07:23.596 tick 100 00:07:23.596 test_end 00:07:23.596 00:07:23.596 real 0m1.202s 00:07:23.596 user 0m1.129s 00:07:23.596 sys 0m0.069s 00:07:23.596 07:13:30 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.596 07:13:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:23.596 ************************************ 00:07:23.596 END TEST event_reactor 00:07:23.596 ************************************ 00:07:23.596 07:13:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:23.596 07:13:30 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:23.596 07:13:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.596 07:13:30 event -- common/autotest_common.sh@10 -- # set +x 00:07:23.857 ************************************ 00:07:23.857 START TEST event_reactor_perf 00:07:23.857 ************************************ 00:07:23.857 07:13:30 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:23.857 [2024-07-25 07:13:31.013994] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:07:23.857 [2024-07-25 07:13:31.014106] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4083967 ] 00:07:23.857 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.857 [2024-07-25 07:13:31.085722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.857 [2024-07-25 07:13:31.150883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.242 test_start 00:07:25.242 test_end 00:07:25.242 Performance: 369294 events per second 00:07:25.242 00:07:25.242 real 0m1.211s 00:07:25.242 user 0m1.131s 00:07:25.242 sys 0m0.076s 00:07:25.242 07:13:32 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.242 07:13:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.242 ************************************ 00:07:25.242 END TEST event_reactor_perf 00:07:25.242 ************************************ 00:07:25.242 07:13:32 event -- event/event.sh@49 -- # uname -s 00:07:25.242 07:13:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:25.242 07:13:32 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:25.242 07:13:32 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.242 07:13:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.242 07:13:32 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.242 ************************************ 00:07:25.242 START TEST event_scheduler 00:07:25.242 ************************************ 00:07:25.242 07:13:32 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:25.242 * Looking for test storage... 00:07:25.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:25.242 07:13:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:25.242 07:13:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4084179 00:07:25.242 07:13:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:25.242 07:13:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:25.242 07:13:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4084179 00:07:25.242 07:13:32 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 4084179 ']' 00:07:25.242 07:13:32 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.242 07:13:32 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.242 07:13:32 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.242 07:13:32 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.242 07:13:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:25.242 [2024-07-25 07:13:32.428919] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:25.242 [2024-07-25 07:13:32.428989] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4084179 ] 00:07:25.242 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.242 [2024-07-25 07:13:32.486729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.242 [2024-07-25 07:13:32.552756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.242 [2024-07-25 07:13:32.552913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.242 [2024-07-25 07:13:32.553009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.242 [2024-07-25 07:13:32.553012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.184 07:13:33 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.184 07:13:33 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:26.184 07:13:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:26.184 07:13:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:26.184 [2024-07-25 07:13:33.223223] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:26.184 [2024-07-25 07:13:33.223239] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:26.184 [2024-07-25 07:13:33.223249] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:26.184 [2024-07-25 07:13:33.223255] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:26.184 [2024-07-25 07:13:33.223261] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:26.184 07:13:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.184 07:13:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:26.184 07:13:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:26.184 [2024-07-25 07:13:33.281498] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
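Setup for the scheduler test above: the app is launched with --wait-for-rpc so initialization pauses before subsystems come up, framework_set_scheduler switches it to the dynamic scheduler, and framework_start_init lets startup finish. The dpdk_governor error in the trace just means the governor could not be initialized on this host (the 0xF mask covers only part of an SMT sibling set), so the dynamic scheduler runs without it, with the load/core/busy thresholds printed above. The same two RPCs work against any SPDK app started with --wait-for-rpc; a minimal sketch with spdk_tgt standing in for the test's own scheduler app:

  # start paused, before subsystem initialization
  ./build/bin/spdk_tgt --wait-for-rpc &

  # choose the dynamic scheduler, then let initialization finish
  ./scripts/rpc.py -r 100 framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init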
00:07:26.184 07:13:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.184 07:13:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:26.184 07:13:33 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.184 07:13:33 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:26.184 ************************************ 00:07:26.184 START TEST scheduler_create_thread 00:07:26.184 ************************************ 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.184 2 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.184 3 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.184 4 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.184 5 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.184 6 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.184 7 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.184 8 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.184 9 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.184 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.754 10 00:07:26.754 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.754 07:13:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:26.754 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.754 07:13:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.138 07:13:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.138 07:13:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:28.138 07:13:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:28.138 07:13:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.138 07:13:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.711 07:13:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.711 07:13:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:28.711 07:13:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.711 07:13:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.653 07:13:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.653 07:13:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:29.653 07:13:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:29.653 07:13:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.653 07:13:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.226 07:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.226 00:07:30.226 real 0m4.222s 00:07:30.226 user 0m0.027s 00:07:30.226 sys 0m0.004s 00:07:30.226 07:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.226 07:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.226 ************************************ 00:07:30.226 END TEST scheduler_create_thread 00:07:30.226 ************************************ 00:07:30.226 07:13:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:30.226 07:13:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4084179 00:07:30.226 07:13:37 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 4084179 ']' 00:07:30.226 07:13:37 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 4084179 00:07:30.226 07:13:37 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:30.226 07:13:37 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.226 07:13:37 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4084179 00:07:30.487 07:13:37 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:30.487 07:13:37 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:30.487 07:13:37 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4084179' 00:07:30.487 killing process with pid 4084179 00:07:30.487 07:13:37 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 4084179 00:07:30.487 07:13:37 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 4084179 00:07:30.487 [2024-07-25 07:13:37.822744] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
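The scheduler_create_thread sub-test above drives everything through RPCs that live in an rpc.py plugin shipped with the test (scheduler_plugin), not in the core RPC set: scheduler_thread_create takes a name, an optional cpumask and an activity percentage, scheduler_thread_set_active re-weights a thread by id, and scheduler_thread_delete removes it. Condensed from the trace; this only works while test/event/scheduler/scheduler is running and with the plugin's directory on PYTHONPATH, which the test arranges:

  # create a pinned thread on core 0 that reports itself 100% busy;
  # the plugin prints the new thread id, which the test captures the same way
  tid=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100)

  # drop its activity to 50%, then remove it
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"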
00:07:30.747 00:07:30.747 real 0m5.716s 00:07:30.747 user 0m12.773s 00:07:30.747 sys 0m0.361s 00:07:30.747 07:13:37 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.747 07:13:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.747 ************************************ 00:07:30.747 END TEST event_scheduler 00:07:30.747 ************************************ 00:07:30.747 07:13:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:30.747 07:13:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:30.747 07:13:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.747 07:13:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.747 07:13:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.747 ************************************ 00:07:30.747 START TEST app_repeat 00:07:30.747 ************************************ 00:07:30.747 07:13:38 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4085412 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4085412' 00:07:30.747 Process app_repeat pid: 4085412 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:30.747 spdk_app_start Round 0 00:07:30.747 07:13:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4085412 /var/tmp/spdk-nbd.sock 00:07:30.747 07:13:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4085412 ']' 00:07:30.747 07:13:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:30.747 07:13:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.747 07:13:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:30.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:30.747 07:13:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.747 07:13:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:30.747 [2024-07-25 07:13:38.112308] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:07:30.747 [2024-07-25 07:13:38.112446] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4085412 ] 00:07:31.007 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.007 [2024-07-25 07:13:38.173920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:31.007 [2024-07-25 07:13:38.238052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.007 [2024-07-25 07:13:38.238054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.580 07:13:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.580 07:13:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:31.580 07:13:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.841 Malloc0 00:07:31.841 07:13:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.841 Malloc1 00:07:32.103 07:13:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:32.103 /dev/nbd0 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.103 07:13:39 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.103 1+0 records in 00:07:32.103 1+0 records out 00:07:32.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201166 s, 20.4 MB/s 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.103 07:13:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.103 07:13:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:32.376 /dev/nbd1 00:07:32.376 07:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:32.376 07:13:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.376 1+0 records in 00:07:32.376 1+0 records out 00:07:32.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216083 s, 19.0 MB/s 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:32.376 07:13:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:32.376 07:13:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.376 07:13:39 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.376 07:13:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.376 07:13:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.376 07:13:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:32.694 { 00:07:32.694 "nbd_device": "/dev/nbd0", 00:07:32.694 "bdev_name": "Malloc0" 00:07:32.694 }, 00:07:32.694 { 00:07:32.694 "nbd_device": "/dev/nbd1", 00:07:32.694 "bdev_name": "Malloc1" 00:07:32.694 } 00:07:32.694 ]' 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:32.694 { 00:07:32.694 "nbd_device": "/dev/nbd0", 00:07:32.694 "bdev_name": "Malloc0" 00:07:32.694 }, 00:07:32.694 { 00:07:32.694 "nbd_device": "/dev/nbd1", 00:07:32.694 "bdev_name": "Malloc1" 00:07:32.694 } 00:07:32.694 ]' 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:32.694 /dev/nbd1' 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:32.694 /dev/nbd1' 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:32.694 256+0 records in 00:07:32.694 256+0 records out 00:07:32.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119655 s, 87.6 MB/s 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:32.694 256+0 records in 00:07:32.694 256+0 records out 00:07:32.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0384276 s, 27.3 MB/s 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:32.694 256+0 records in 00:07:32.694 256+0 records out 00:07:32.694 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0167062 s, 62.8 MB/s 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.694 07:13:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.955 07:13:40 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.955 07:13:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:33.216 07:13:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:33.216 07:13:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:33.477 07:13:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:33.477 [2024-07-25 07:13:40.769179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.477 [2024-07-25 07:13:40.833893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.477 [2024-07-25 07:13:40.833896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.738 [2024-07-25 07:13:40.865401] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:33.738 [2024-07-25 07:13:40.865438] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:36.285 07:13:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:36.285 07:13:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:36.285 spdk_app_start Round 1 00:07:36.285 07:13:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4085412 /var/tmp/spdk-nbd.sock 00:07:36.285 07:13:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4085412 ']' 00:07:36.285 07:13:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:36.285 07:13:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.285 07:13:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:36.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
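Each app_repeat round above performs the same nbd round trip: create two malloc bdevs over the app's /var/tmp/spdk-nbd.sock RPC socket, export them as /dev/nbd0 and /dev/nbd1, push a 1 MiB random pattern through each block device, and compare it back before tearing the nbd nodes down. A condensed sketch of one device's round, with RPC used as shorthand for the full rpc.py invocation and the workspace paths elided:
  RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"         # shorthand, not spelled out like this in the trace
  $RPC bdev_malloc_create 64 4096                        # -> Malloc0 (repeated for Malloc1)
  $RPC nbd_start_disk Malloc0 /dev/nbd0                  # attach the bdev to an nbd node
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of random reference data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0                     # read back through nbd and verify
  $RPC nbd_stop_disk /dev/nbd0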
00:07:36.285 07:13:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.285 07:13:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.545 07:13:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.545 07:13:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:36.545 07:13:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:36.806 Malloc0 00:07:36.806 07:13:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:36.806 Malloc1 00:07:36.806 07:13:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:36.806 07:13:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:37.066 /dev/nbd0 00:07:37.066 07:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:37.066 07:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:37.066 1+0 records in 00:07:37.066 1+0 records out 00:07:37.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239091 s, 17.1 MB/s 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:37.066 07:13:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:37.066 07:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.066 07:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:37.066 07:13:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:37.327 /dev/nbd1 00:07:37.327 07:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:37.327 07:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:37.327 1+0 records in 00:07:37.327 1+0 records out 00:07:37.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002941 s, 13.9 MB/s 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:37.327 07:13:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:37.327 07:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.327 07:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:37.327 07:13:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:37.327 07:13:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.327 07:13:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:37.327 07:13:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:37.327 { 00:07:37.327 "nbd_device": "/dev/nbd0", 00:07:37.327 "bdev_name": "Malloc0" 00:07:37.327 }, 00:07:37.327 { 00:07:37.327 "nbd_device": "/dev/nbd1", 00:07:37.327 "bdev_name": "Malloc1" 00:07:37.327 } 00:07:37.327 ]' 00:07:37.327 07:13:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:37.327 { 00:07:37.327 "nbd_device": "/dev/nbd0", 00:07:37.327 "bdev_name": "Malloc0" 00:07:37.327 }, 00:07:37.327 { 00:07:37.327 "nbd_device": "/dev/nbd1", 00:07:37.327 "bdev_name": "Malloc1" 00:07:37.327 } 00:07:37.327 ]' 00:07:37.327 07:13:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:37.588 /dev/nbd1' 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:37.588 /dev/nbd1' 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:37.588 256+0 records in 00:07:37.588 256+0 records out 00:07:37.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124459 s, 84.3 MB/s 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:37.588 256+0 records in 00:07:37.588 256+0 records out 00:07:37.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159759 s, 65.6 MB/s 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:37.588 256+0 records in 00:07:37.588 256+0 records out 00:07:37.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164998 s, 63.6 MB/s 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.588 07:13:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.849 07:13:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.849 07:13:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.849 07:13:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.849 07:13:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.849 07:13:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.849 07:13:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.849 07:13:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:37.849 07:13:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.849 07:13:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.849 07:13:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:37.849 07:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:37.849 07:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:37.849 07:13:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:37.849 07:13:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.849 07:13:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.849 07:13:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:37.849 07:13:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:37.849 07:13:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.849 07:13:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:37.849 07:13:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.849 07:13:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:38.110 07:13:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:38.110 07:13:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:38.371 07:13:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:38.371 [2024-07-25 07:13:45.671373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:38.371 [2024-07-25 07:13:45.736190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.371 [2024-07-25 07:13:45.736194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.631 [2024-07-25 07:13:45.768542] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:38.631 [2024-07-25 07:13:45.768578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:41.175 07:13:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:41.436 07:13:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:41.436 spdk_app_start Round 2 00:07:41.436 07:13:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4085412 /var/tmp/spdk-nbd.sock 00:07:41.436 07:13:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4085412 ']' 00:07:41.436 07:13:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:41.436 07:13:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.436 07:13:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:41.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
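In between, nbd_common.sh keeps checking how many nbd nodes are attached by parsing the nbd_get_disks JSON shown in the trace. With the same $RPC shorthand as above, the count logic is roughly the following sketch (the failure path is an assumption; the trace only shows the comparisons):
  disks=$($RPC nbd_get_disks)                            # e.g. [ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, ... ]
  count=$(echo "$disks" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  # '|| true' covers the empty list, where grep -c finds nothing and exits non-zero (the bare 'true' in the trace)
  if [ "$count" -ne 2 ]; then return 1; fi               # inside the helper; the trace shows '[' 2 -ne 2 ']' and later '[' 0 -ne 0 ']'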
00:07:41.436 07:13:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.436 07:13:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:41.436 07:13:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.436 07:13:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:41.436 07:13:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:41.697 Malloc0 00:07:41.697 07:13:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:41.697 Malloc1 00:07:41.697 07:13:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.698 07:13:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:41.959 /dev/nbd0 00:07:41.959 07:13:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:41.959 07:13:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:41.959 1+0 records in 00:07:41.959 1+0 records out 00:07:41.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020245 s, 20.2 MB/s 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.959 07:13:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:41.959 07:13:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.959 07:13:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:41.959 07:13:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:42.221 /dev/nbd1 00:07:42.221 07:13:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:42.221 07:13:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:42.221 1+0 records in 00:07:42.221 1+0 records out 00:07:42.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243827 s, 16.8 MB/s 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.221 07:13:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:42.221 07:13:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.221 07:13:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.221 07:13:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.221 07:13:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.221 07:13:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.221 07:13:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:42.221 { 00:07:42.221 "nbd_device": "/dev/nbd0", 00:07:42.221 "bdev_name": "Malloc0" 00:07:42.221 }, 00:07:42.221 { 00:07:42.221 "nbd_device": "/dev/nbd1", 00:07:42.221 "bdev_name": "Malloc1" 00:07:42.221 } 00:07:42.221 ]' 00:07:42.221 07:13:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:42.221 { 00:07:42.221 "nbd_device": "/dev/nbd0", 00:07:42.221 "bdev_name": "Malloc0" 00:07:42.221 }, 00:07:42.221 { 00:07:42.221 "nbd_device": "/dev/nbd1", 00:07:42.221 "bdev_name": "Malloc1" 00:07:42.221 } 00:07:42.221 ]' 00:07:42.221 07:13:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.483 07:13:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:42.483 /dev/nbd1' 00:07:42.483 07:13:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:42.483 /dev/nbd1' 00:07:42.483 07:13:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.483 07:13:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:42.483 07:13:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:42.483 07:13:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:42.483 07:13:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:42.483 07:13:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:42.483 07:13:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:42.484 256+0 records in 00:07:42.484 256+0 records out 00:07:42.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122155 s, 85.8 MB/s 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:42.484 256+0 records in 00:07:42.484 256+0 records out 00:07:42.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258442 s, 40.6 MB/s 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:42.484 256+0 records in 00:07:42.484 256+0 records out 00:07:42.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157859 s, 66.4 MB/s 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.484 07:13:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:42.746 07:13:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.746 07:13:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.746 07:13:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:42.746 07:13:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.746 07:13:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.746 07:13:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.746 07:13:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:42.746 07:13:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.746 07:13:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.746 07:13:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:42.746 07:13:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:42.746 07:13:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:42.746 07:13:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:42.746 07:13:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.746 07:13:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.746 07:13:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:42.746 07:13:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:42.746 07:13:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.746 07:13:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.746 07:13:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.746 07:13:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:43.008 07:13:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:43.008 07:13:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:43.269 07:13:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:43.269 [2024-07-25 07:13:50.558305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:43.269 [2024-07-25 07:13:50.622250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.269 [2024-07-25 07:13:50.622274] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.531 [2024-07-25 07:13:50.653581] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:43.531 [2024-07-25 07:13:50.653614] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:46.089 07:13:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4085412 /var/tmp/spdk-nbd.sock 00:07:46.089 07:13:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4085412 ']' 00:07:46.089 07:13:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:46.089 07:13:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.089 07:13:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:46.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:46.089 07:13:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.089 07:13:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:46.357 07:13:53 event.app_repeat -- event/event.sh@39 -- # killprocess 4085412 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 4085412 ']' 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 4085412 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4085412 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4085412' 00:07:46.357 killing process with pid 4085412 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@969 -- # kill 4085412 00:07:46.357 07:13:53 event.app_repeat -- common/autotest_common.sh@974 -- # wait 4085412 00:07:46.619 spdk_app_start is called in Round 0. 00:07:46.619 Shutdown signal received, stop current app iteration 00:07:46.619 Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 reinitialization... 00:07:46.619 spdk_app_start is called in Round 1. 00:07:46.619 Shutdown signal received, stop current app iteration 00:07:46.619 Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 reinitialization... 00:07:46.619 spdk_app_start is called in Round 2. 00:07:46.619 Shutdown signal received, stop current app iteration 00:07:46.619 Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 reinitialization... 00:07:46.619 spdk_app_start is called in Round 3. 
00:07:46.619 Shutdown signal received, stop current app iteration 00:07:46.619 07:13:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:46.619 07:13:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:46.619 00:07:46.619 real 0m15.681s 00:07:46.619 user 0m33.827s 00:07:46.619 sys 0m2.111s 00:07:46.619 07:13:53 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.619 07:13:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:46.619 ************************************ 00:07:46.619 END TEST app_repeat 00:07:46.619 ************************************ 00:07:46.619 07:13:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:46.619 07:13:53 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:46.619 07:13:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.619 07:13:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.619 07:13:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:46.619 ************************************ 00:07:46.619 START TEST cpu_locks 00:07:46.619 ************************************ 00:07:46.619 07:13:53 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:46.619 * Looking for test storage... 00:07:46.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:46.619 07:13:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:46.619 07:13:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:46.619 07:13:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:46.619 07:13:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:46.619 07:13:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.619 07:13:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.619 07:13:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.619 ************************************ 00:07:46.619 START TEST default_locks 00:07:46.619 ************************************ 00:07:46.619 07:13:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:46.619 07:13:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4088670 00:07:46.619 07:13:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4088670 00:07:46.619 07:13:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 4088670 ']' 00:07:46.619 07:13:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.619 07:13:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.619 07:13:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
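The default_locks run starting above boots spdk_tgt on core mask 0x1 and then checks, with lslocks, that the process holds a file lock whose path contains spdk_cpu_lock (the per-core lock files live under /var/tmp/spdk_cpu_lock_*, as a later test in this log shows). A minimal sketch of that check, assuming an SPDK checkout and using a plain sleep in place of the harness's waitforlisten helper:

  # Start a single-core SPDK target and confirm it took the per-core lock file.
  ./build/bin/spdk_tgt -m 0x1 &
  pid=$!
  sleep 2                                                   # crude stand-in for waitforlisten
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "pid $pid holds a spdk_cpu_lock file lock, as expected"
  fi
  kill "$pid" && wait "$pid"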
00:07:46.619 07:13:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.619 07:13:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.619 07:13:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:46.880 [2024-07-25 07:13:54.020159] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:46.880 [2024-07-25 07:13:54.020219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4088670 ] 00:07:46.880 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.880 [2024-07-25 07:13:54.081815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.880 [2024-07-25 07:13:54.153747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.453 07:13:54 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.453 07:13:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:47.453 07:13:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4088670 00:07:47.453 07:13:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:47.453 07:13:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4088670 00:07:48.023 lslocks: write error 00:07:48.023 07:13:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4088670 00:07:48.023 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 4088670 ']' 00:07:48.023 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 4088670 00:07:48.023 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:48.023 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.023 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4088670 00:07:48.023 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.023 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.023 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4088670' 00:07:48.023 killing process with pid 4088670 00:07:48.023 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 4088670 00:07:48.023 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 4088670 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4088670 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 4088670 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:48.284 07:13:55 
event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 4088670 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 4088670 ']' 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (4088670) - No such process 00:07:48.284 ERROR: process (pid: 4088670) is no longer running 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:48.284 00:07:48.284 real 0m1.505s 00:07:48.284 user 0m1.566s 00:07:48.284 sys 0m0.526s 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.284 07:13:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.284 ************************************ 00:07:48.284 END TEST default_locks 00:07:48.284 ************************************ 00:07:48.284 07:13:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:48.284 07:13:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.284 07:13:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.284 07:13:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.284 ************************************ 00:07:48.284 START TEST default_locks_via_rpc 00:07:48.284 ************************************ 00:07:48.284 07:13:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:48.285 07:13:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4089038 00:07:48.285 07:13:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4089038 00:07:48.285 
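The default_locks_via_rpc test now underway exercises the same lock files through JSON-RPC instead of command-line flags: framework_disable_cpumask_locks releases them on a running target and framework_enable_cpumask_locks takes them again. A hedged sketch of that round trip (rpc_cmd in the trace appears to wrap scripts/rpc.py against the default /var/tmp/spdk.sock; the pid lookup below is a rough substitute for the pid the harness already tracks):

  # Toggle the CPU core lock files on an already-running spdk_tgt.
  pid=$(pgrep -f spdk_tgt | head -n1)                       # assumes exactly one spdk_tgt is running
  ./scripts/rpc.py framework_disable_cpumask_locks          # release the per-core lock files
  lslocks -p "$pid" | grep -c spdk_cpu_lock || true         # prints 0 while locking is disabled
  ./scripts/rpc.py framework_enable_cpumask_locks           # take the locks again
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"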
07:13:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4089038 ']' 00:07:48.285 07:13:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.285 07:13:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.285 07:13:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.285 07:13:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.285 07:13:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.285 07:13:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:48.285 [2024-07-25 07:13:55.591750] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:48.285 [2024-07-25 07:13:55.591798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4089038 ] 00:07:48.285 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.285 [2024-07-25 07:13:55.650356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.545 [2024-07-25 07:13:55.716975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4089038 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4089038 00:07:49.117 07:13:56 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:49.731 07:13:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4089038 00:07:49.731 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 4089038 ']' 00:07:49.731 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 4089038 00:07:49.731 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:49.731 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.731 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4089038 00:07:49.731 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.731 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.732 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4089038' 00:07:49.732 killing process with pid 4089038 00:07:49.732 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 4089038 00:07:49.732 07:13:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 4089038 00:07:49.732 00:07:49.732 real 0m1.511s 00:07:49.732 user 0m1.607s 00:07:49.732 sys 0m0.506s 00:07:49.732 07:13:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.732 07:13:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.732 ************************************ 00:07:49.732 END TEST default_locks_via_rpc 00:07:49.732 ************************************ 00:07:49.732 07:13:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:49.732 07:13:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.732 07:13:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.732 07:13:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.992 ************************************ 00:07:49.992 START TEST non_locking_app_on_locked_coremask 00:07:49.992 ************************************ 00:07:49.992 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:49.992 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4089401 00:07:49.992 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4089401 /var/tmp/spdk.sock 00:07:49.992 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4089401 ']' 00:07:49.992 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.992 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.992 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
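In non_locking_app_on_locked_coremask, whose startup begins above, the first target claims core 0 normally while a second one is pointed at the same core but launched with --disable-cpumask-locks and its own RPC socket, so both can run side by side. A condensed sketch using the same binary and flags that appear in the trace:

  # The first target takes the core-0 lock; the second opts out of locking and uses a separate socket.
  ./build/bin/spdk_tgt -m 0x1 &
  locked_pid=$!
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  unlocked_pid=$!
  sleep 2                                                     # stand-in for waitforlisten on both sockets
  lslocks -p "$locked_pid"   | grep -c spdk_cpu_lock          # 1: the lock file is held
  lslocks -p "$unlocked_pid" | grep -c spdk_cpu_lock || true  # 0: locking was disabled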
00:07:49.992 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.992 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.992 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:49.992 [2024-07-25 07:13:57.180062] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:49.992 [2024-07-25 07:13:57.180111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4089401 ] 00:07:49.992 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.992 [2024-07-25 07:13:57.239460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.992 [2024-07-25 07:13:57.308004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.563 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.563 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:50.563 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4089608 00:07:50.563 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4089608 /var/tmp/spdk2.sock 00:07:50.563 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4089608 ']' 00:07:50.564 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.564 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.564 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.564 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.564 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.564 07:13:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:50.824 [2024-07-25 07:13:57.973060] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:50.824 [2024-07-25 07:13:57.973116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4089608 ] 00:07:50.824 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.824 [2024-07-25 07:13:58.062130] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:50.824 [2024-07-25 07:13:58.062160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.824 [2024-07-25 07:13:58.191686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.396 07:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.396 07:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:51.396 07:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4089401 00:07:51.396 07:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4089401 00:07:51.396 07:13:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:51.968 lslocks: write error 00:07:51.968 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4089401 00:07:51.968 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4089401 ']' 00:07:51.968 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 4089401 00:07:51.968 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:51.968 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.968 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4089401 00:07:51.968 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.968 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.968 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4089401' 00:07:51.968 killing process with pid 4089401 00:07:51.968 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 4089401 00:07:51.968 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 4089401 00:07:52.229 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4089608 00:07:52.229 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4089608 ']' 00:07:52.229 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 4089608 00:07:52.229 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:52.229 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.229 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4089608 00:07:52.229 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.229 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.229 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4089608' 00:07:52.229 
killing process with pid 4089608 00:07:52.229 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 4089608 00:07:52.229 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 4089608 00:07:52.489 00:07:52.489 real 0m2.680s 00:07:52.489 user 0m2.943s 00:07:52.489 sys 0m0.767s 00:07:52.490 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.490 07:13:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.490 ************************************ 00:07:52.490 END TEST non_locking_app_on_locked_coremask 00:07:52.490 ************************************ 00:07:52.490 07:13:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:52.490 07:13:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.490 07:13:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.490 07:13:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.751 ************************************ 00:07:52.751 START TEST locking_app_on_unlocked_coremask 00:07:52.751 ************************************ 00:07:52.751 07:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:52.751 07:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4090104 00:07:52.751 07:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4090104 /var/tmp/spdk.sock 00:07:52.751 07:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:52.751 07:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4090104 ']' 00:07:52.751 07:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.751 07:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.751 07:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.751 07:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.751 07:13:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.751 [2024-07-25 07:13:59.922567] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:52.751 [2024-07-25 07:13:59.922618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090104 ] 00:07:52.751 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.751 [2024-07-25 07:13:59.981596] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
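locking_app_on_unlocked_coremask, starting above, flips that arrangement: the first target runs with --disable-cpumask-locks, so the core-0 lock file stays free and a second, normally locking target on the same core can claim it. Roughly, under the same assumptions as the earlier sketches:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &     # leaves the core-0 lock file free
  sleep 2
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # locking enabled, claims core 0 itself
  second=$!
  sleep 2
  lslocks -p "$second" | grep -q spdk_cpu_lock && echo "second instance owns the core-0 lock"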
00:07:52.751 [2024-07-25 07:13:59.981625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.751 [2024-07-25 07:14:00.046857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.324 07:14:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.324 07:14:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:53.585 07:14:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:53.585 07:14:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4090123 00:07:53.585 07:14:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4090123 /var/tmp/spdk2.sock 00:07:53.585 07:14:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4090123 ']' 00:07:53.585 07:14:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:53.585 07:14:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.585 07:14:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:53.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:53.585 07:14:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.585 07:14:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.585 [2024-07-25 07:14:00.719962] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:07:53.585 [2024-07-25 07:14:00.720007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090123 ] 00:07:53.585 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.585 [2024-07-25 07:14:00.802067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.585 [2024-07-25 07:14:00.931364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.158 07:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.158 07:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:54.158 07:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4090123 00:07:54.158 07:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4090123 00:07:54.158 07:14:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:54.731 lslocks: write error 00:07:54.731 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4090104 00:07:54.731 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4090104 ']' 00:07:54.731 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 4090104 00:07:54.731 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:54.731 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:54.731 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4090104 00:07:54.992 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:54.992 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:54.992 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4090104' 00:07:54.992 killing process with pid 4090104 00:07:54.992 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 4090104 00:07:54.992 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 4090104 00:07:55.253 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4090123 00:07:55.253 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4090123 ']' 00:07:55.253 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 4090123 00:07:55.253 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:55.253 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.253 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4090123 00:07:55.253 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:07:55.253 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:55.253 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4090123' 00:07:55.253 killing process with pid 4090123 00:07:55.253 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 4090123 00:07:55.253 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 4090123 00:07:55.514 00:07:55.514 real 0m2.950s 00:07:55.514 user 0m3.212s 00:07:55.514 sys 0m0.862s 00:07:55.514 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.514 07:14:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.514 ************************************ 00:07:55.514 END TEST locking_app_on_unlocked_coremask 00:07:55.514 ************************************ 00:07:55.514 07:14:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:55.514 07:14:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.514 07:14:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.514 07:14:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.775 ************************************ 00:07:55.775 START TEST locking_app_on_locked_coremask 00:07:55.775 ************************************ 00:07:55.775 07:14:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:55.775 07:14:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4090657 00:07:55.775 07:14:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4090657 /var/tmp/spdk.sock 00:07:55.775 07:14:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4090657 ']' 00:07:55.775 07:14:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.775 07:14:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.775 07:14:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.775 07:14:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.775 07:14:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.775 07:14:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:55.775 [2024-07-25 07:14:02.948741] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:07:55.775 [2024-07-25 07:14:02.948795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090657 ] 00:07:55.775 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.775 [2024-07-25 07:14:03.008838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.776 [2024-07-25 07:14:03.078108] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4090829 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4090829 /var/tmp/spdk2.sock 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 4090829 /var/tmp/spdk2.sock 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 4090829 /var/tmp/spdk2.sock 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4090829 ']' 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:56.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.347 07:14:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.607 [2024-07-25 07:14:03.748565] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:07:56.607 [2024-07-25 07:14:03.748618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4090829 ] 00:07:56.607 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.607 [2024-07-25 07:14:03.837137] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4090657 has claimed it. 00:07:56.607 [2024-07-25 07:14:03.837176] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:57.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (4090829) - No such process 00:07:57.179 ERROR: process (pid: 4090829) is no longer running 00:07:57.179 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.179 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:57.179 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:57.179 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:57.179 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:57.179 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:57.179 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4090657 00:07:57.179 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4090657 00:07:57.179 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:57.440 lslocks: write error 00:07:57.440 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4090657 00:07:57.440 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4090657 ']' 00:07:57.440 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 4090657 00:07:57.440 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:57.440 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.440 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4090657 00:07:57.701 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.701 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.702 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4090657' 00:07:57.702 killing process with pid 4090657 00:07:57.702 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 4090657 00:07:57.702 07:14:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 4090657 00:07:57.702 00:07:57.702 real 0m2.174s 00:07:57.702 user 0m2.422s 00:07:57.702 sys 0m0.579s 00:07:57.702 07:14:05 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.702 07:14:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.702 ************************************ 00:07:57.702 END TEST locking_app_on_locked_coremask 00:07:57.702 ************************************ 00:07:57.962 07:14:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:57.962 07:14:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.962 07:14:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.962 07:14:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:57.962 ************************************ 00:07:57.962 START TEST locking_overlapped_coremask 00:07:57.962 ************************************ 00:07:57.962 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:57.962 07:14:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4091189 00:07:57.962 07:14:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4091189 /var/tmp/spdk.sock 00:07:57.962 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 4091189 ']' 00:07:57.962 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.962 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.962 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.962 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.962 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.962 07:14:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:57.962 [2024-07-25 07:14:05.181535] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:07:57.962 [2024-07-25 07:14:05.181583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091189 ] 00:07:57.962 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.962 [2024-07-25 07:14:05.240277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.962 [2024-07-25 07:14:05.308567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.962 [2024-07-25 07:14:05.308686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.962 [2024-07-25 07:14:05.308689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4091221 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4091221 /var/tmp/spdk2.sock 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 4091221 /var/tmp/spdk2.sock 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 4091221 /var/tmp/spdk2.sock 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 4091221 ']' 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:58.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.906 07:14:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.906 [2024-07-25 07:14:06.012352] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
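The two core masks used in locking_overlapped_coremask overlap on exactly one core: 0x7 is binary 00111 (cores 0 to 2) and 0x1c is 11100 (cores 2 to 4), so 0x7 & 0x1c = 0x4, which is bit 2. That single shared core is what triggers the 'Cannot create lock on core 2' failure shown in the next chunk. A one-line check of the overlap:

  printf 'overlapping core mask: 0x%x\n' $(( 0x7 & 0x1c ))  # prints 0x4, i.e. core 2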
00:07:58.906 [2024-07-25 07:14:06.012408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091221 ] 00:07:58.906 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.906 [2024-07-25 07:14:06.085845] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4091189 has claimed it. 00:07:58.906 [2024-07-25 07:14:06.085878] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:59.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (4091221) - No such process 00:07:59.479 ERROR: process (pid: 4091221) is no longer running 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4091189 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 4091189 ']' 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 4091189 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4091189 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4091189' 00:07:59.479 killing process with pid 4091189 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 4091189 00:07:59.479 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 4091189 00:07:59.741 00:07:59.741 real 0m1.751s 00:07:59.741 user 0m4.980s 00:07:59.741 sys 0m0.366s 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.741 ************************************ 00:07:59.741 END TEST locking_overlapped_coremask 00:07:59.741 ************************************ 00:07:59.741 07:14:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:59.741 07:14:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.741 07:14:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.741 07:14:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.741 ************************************ 00:07:59.741 START TEST locking_overlapped_coremask_via_rpc 00:07:59.741 ************************************ 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4091563 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4091563 /var/tmp/spdk.sock 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4091563 ']' 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.741 07:14:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.741 [2024-07-25 07:14:07.008577] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:07:59.741 [2024-07-25 07:14:07.008626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091563 ] 00:07:59.741 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.741 [2024-07-25 07:14:07.068013] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:59.741 [2024-07-25 07:14:07.068040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.002 [2024-07-25 07:14:07.135590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.002 [2024-07-25 07:14:07.135708] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.002 [2024-07-25 07:14:07.135711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.576 07:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.576 07:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:00.576 07:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4091661 00:08:00.576 07:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4091661 /var/tmp/spdk2.sock 00:08:00.576 07:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4091661 ']' 00:08:00.576 07:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:00.576 07:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.576 07:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.576 07:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.576 07:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.576 07:14:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.576 [2024-07-25 07:14:07.832794] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:08:00.576 [2024-07-25 07:14:07.832850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4091661 ] 00:08:00.576 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.576 [2024-07-25 07:14:07.904036] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
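In locking_overlapped_coremask_via_rpc, both targets above start with --disable-cpumask-locks, then the first enables locking over RPC and the second's attempt to do the same is expected to fail because core 2 is already claimed (the JSON-RPC error -32603 'Failed to claim CPU core: 2' is reproduced further down). A hedged sketch of that sequence against the two sockets from the trace:

  # The first instance (default socket) claims its cores; the second (spdk2.sock) must be refused.
  ./scripts/rpc.py framework_enable_cpumask_locks           # succeeds, locks cores 0-2
  if ! ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
      echo "second instance could not claim core 2, as expected"   # rpc.py exits non-zero on the error response
  fi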
00:08:00.576 [2024-07-25 07:14:07.904062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.837 [2024-07-25 07:14:08.013865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.838 [2024-07-25 07:14:08.014022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.838 [2024-07-25 07:14:08.014023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.410 [2024-07-25 07:14:08.608260] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4091563 has claimed it. 
00:08:01.410 request: 00:08:01.410 { 00:08:01.410 "method": "framework_enable_cpumask_locks", 00:08:01.410 "req_id": 1 00:08:01.410 } 00:08:01.410 Got JSON-RPC error response 00:08:01.410 response: 00:08:01.410 { 00:08:01.410 "code": -32603, 00:08:01.410 "message": "Failed to claim CPU core: 2" 00:08:01.410 } 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4091563 /var/tmp/spdk.sock 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4091563 ']' 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.410 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.411 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.411 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4091661 /var/tmp/spdk2.sock 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4091661 ']' 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
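The -32603 response above comes from the per-core lock files SPDK keeps under /var/tmp (the check_remaining_locks step further down expands /var/tmp/spdk_cpu_lock_000..002): the first target, pid 4091563, already holds the lock for core 2, so the second target's framework_enable_cpumask_locks RPC cannot claim it. A rough illustration of that behaviour using flock; SPDK takes these locks internally in app.c rather than through flock, so treat this purely as an approximation:

    # Approximation only; SPDK claims cores via lock files named like the one below.
    lockfile=/var/tmp/spdk_cpu_lock_002          # one lock file per claimed core
    exec 9>"$lockfile"
    if flock -n 9; then
        echo "core 2 claimed"
    else
        echo "core 2 already claimed by another process"   # the situation hit here
    fi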
00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:01.672 00:08:01.672 real 0m2.004s 00:08:01.672 user 0m0.778s 00:08:01.672 sys 0m0.155s 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.672 07:14:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.672 ************************************ 00:08:01.672 END TEST locking_overlapped_coremask_via_rpc 00:08:01.672 ************************************ 00:08:01.672 07:14:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:01.672 07:14:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4091563 ]] 00:08:01.672 07:14:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4091563 00:08:01.672 07:14:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4091563 ']' 00:08:01.672 07:14:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4091563 00:08:01.672 07:14:08 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:01.672 07:14:09 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.672 07:14:09 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4091563 00:08:01.933 07:14:09 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:01.933 07:14:09 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:01.933 07:14:09 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4091563' 00:08:01.933 killing process with pid 4091563 00:08:01.933 07:14:09 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 4091563 00:08:01.933 07:14:09 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 4091563 00:08:01.933 07:14:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4091661 ]] 00:08:01.933 07:14:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4091661 00:08:01.933 07:14:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4091661 ']' 00:08:01.933 07:14:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4091661 00:08:01.933 07:14:09 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:01.933 07:14:09 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:08:01.933 07:14:09 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4091661 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4091661' 00:08:02.195 killing process with pid 4091661 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 4091661 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 4091661 00:08:02.195 07:14:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:02.195 07:14:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:02.195 07:14:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4091563 ]] 00:08:02.195 07:14:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4091563 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4091563 ']' 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4091563 00:08:02.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4091563) - No such process 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 4091563 is not found' 00:08:02.195 Process with pid 4091563 is not found 00:08:02.195 07:14:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4091661 ]] 00:08:02.195 07:14:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4091661 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4091661 ']' 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4091661 00:08:02.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4091661) - No such process 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 4091661 is not found' 00:08:02.195 Process with pid 4091661 is not found 00:08:02.195 07:14:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:02.195 00:08:02.195 real 0m15.689s 00:08:02.195 user 0m27.098s 00:08:02.195 sys 0m4.594s 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.195 07:14:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.195 ************************************ 00:08:02.195 END TEST cpu_locks 00:08:02.195 ************************************ 00:08:02.195 00:08:02.195 real 0m41.268s 00:08:02.195 user 1m20.282s 00:08:02.195 sys 0m7.681s 00:08:02.195 07:14:09 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.195 07:14:09 event -- common/autotest_common.sh@10 -- # set +x 00:08:02.195 ************************************ 00:08:02.195 END TEST event 00:08:02.195 ************************************ 00:08:02.457 07:14:09 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:02.457 07:14:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.457 07:14:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.457 07:14:09 -- common/autotest_common.sh@10 -- # set +x 00:08:02.457 ************************************ 00:08:02.457 START TEST thread 00:08:02.457 ************************************ 00:08:02.457 07:14:09 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:02.457 * Looking for test storage... 00:08:02.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:02.457 07:14:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:02.457 07:14:09 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:02.457 07:14:09 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.457 07:14:09 thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.457 ************************************ 00:08:02.457 START TEST thread_poller_perf 00:08:02.457 ************************************ 00:08:02.457 07:14:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:02.457 [2024-07-25 07:14:09.788818] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:08:02.457 [2024-07-25 07:14:09.788924] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092233 ] 00:08:02.457 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.718 [2024-07-25 07:14:09.856247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.718 [2024-07-25 07:14:09.931061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.718 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:03.661 ====================================== 00:08:03.661 busy:2409032912 (cyc) 00:08:03.661 total_run_count: 287000 00:08:03.661 tsc_hz: 2400000000 (cyc) 00:08:03.661 ====================================== 00:08:03.661 poller_cost: 8393 (cyc), 3497 (nsec) 00:08:03.661 00:08:03.661 real 0m1.227s 00:08:03.661 user 0m1.145s 00:08:03.661 sys 0m0.078s 00:08:03.661 07:14:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.661 07:14:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:03.661 ************************************ 00:08:03.661 END TEST thread_poller_perf 00:08:03.661 ************************************ 00:08:03.923 07:14:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:03.923 07:14:11 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:03.923 07:14:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.923 07:14:11 thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.923 ************************************ 00:08:03.923 START TEST thread_poller_perf 00:08:03.923 ************************************ 00:08:03.923 07:14:11 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:03.923 [2024-07-25 07:14:11.090415] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
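The poller_cost figures printed above follow from the counters in the same block: busy cycles divided by total_run_count, then converted to nanoseconds with tsc_hz. A quick re-derivation for the 1-microsecond-period run above (values copied from this log; the integer division lands within a cycle of the reported 8393):

    # Re-derive poller_cost from the counters printed above (copied from this run).
    busy=2409032912; runs=287000; tsc_hz=2400000000
    cyc_per_poll=$(( busy / runs ))                          # ~8394 cyc; report shows 8393
    nsec_per_poll=$(( cyc_per_poll * 1000000000 / tsc_hz ))  # ~3497 nsec at 2.4 GHz
    echo "poller_cost: ${cyc_per_poll} (cyc), ${nsec_per_poll} (nsec)"

The same arithmetic applied to the 0-microsecond-period run that follows (2401920806 cyc over 3809000 runs) gives the 630 cyc / 262 nsec reported there.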
00:08:03.923 [2024-07-25 07:14:11.090517] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092392 ] 00:08:03.923 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.923 [2024-07-25 07:14:11.152646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.923 [2024-07-25 07:14:11.219432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.923 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:04.935 ====================================== 00:08:04.935 busy:2401920806 (cyc) 00:08:04.935 total_run_count: 3809000 00:08:04.935 tsc_hz: 2400000000 (cyc) 00:08:04.935 ====================================== 00:08:04.935 poller_cost: 630 (cyc), 262 (nsec) 00:08:04.935 00:08:04.935 real 0m1.204s 00:08:04.935 user 0m1.130s 00:08:04.935 sys 0m0.070s 00:08:04.935 07:14:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.935 07:14:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:04.935 ************************************ 00:08:04.935 END TEST thread_poller_perf 00:08:04.935 ************************************ 00:08:05.197 07:14:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:05.197 00:08:05.197 real 0m2.685s 00:08:05.197 user 0m2.371s 00:08:05.197 sys 0m0.321s 00:08:05.197 07:14:12 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.197 07:14:12 thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.197 ************************************ 00:08:05.197 END TEST thread 00:08:05.197 ************************************ 00:08:05.197 07:14:12 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:08:05.197 07:14:12 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:05.197 07:14:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.197 07:14:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.197 07:14:12 -- common/autotest_common.sh@10 -- # set +x 00:08:05.197 ************************************ 00:08:05.197 START TEST app_cmdline 00:08:05.197 ************************************ 00:08:05.197 07:14:12 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:05.197 * Looking for test storage... 00:08:05.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:05.197 07:14:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:05.197 07:14:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4092760 00:08:05.197 07:14:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4092760 00:08:05.197 07:14:12 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:05.197 07:14:12 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 4092760 ']' 00:08:05.197 07:14:12 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.197 07:14:12 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.197 07:14:12 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:05.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.197 07:14:12 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.197 07:14:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:05.197 [2024-07-25 07:14:12.557728] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:08:05.197 [2024-07-25 07:14:12.557806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4092760 ] 00:08:05.458 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.458 [2024-07-25 07:14:12.624042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.458 [2024-07-25 07:14:12.700693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.030 07:14:13 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.030 07:14:13 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:06.030 07:14:13 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:06.290 { 00:08:06.290 "version": "SPDK v24.09-pre git sha1 223450b47", 00:08:06.291 "fields": { 00:08:06.291 "major": 24, 00:08:06.291 "minor": 9, 00:08:06.291 "patch": 0, 00:08:06.291 "suffix": "-pre", 00:08:06.291 "commit": "223450b47" 00:08:06.291 } 00:08:06.291 } 00:08:06.291 07:14:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:06.291 07:14:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:06.291 07:14:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:06.291 07:14:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:06.291 07:14:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:06.291 07:14:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:06.291 07:14:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.291 07:14:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:06.291 07:14:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:06.291 07:14:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:06.291 07:14:13 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:06.552 request: 00:08:06.552 { 00:08:06.552 "method": "env_dpdk_get_mem_stats", 00:08:06.552 "req_id": 1 00:08:06.552 } 00:08:06.552 Got JSON-RPC error response 00:08:06.552 response: 00:08:06.552 { 00:08:06.552 "code": -32601, 00:08:06.552 "message": "Method not found" 00:08:06.552 } 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.552 07:14:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4092760 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 4092760 ']' 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 4092760 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4092760 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4092760' 00:08:06.552 killing process with pid 4092760 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@969 -- # kill 4092760 00:08:06.552 07:14:13 app_cmdline -- common/autotest_common.sh@974 -- # wait 4092760 00:08:06.813 00:08:06.813 real 0m1.598s 00:08:06.813 user 0m1.932s 00:08:06.813 sys 0m0.418s 00:08:06.813 07:14:13 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.813 07:14:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:06.813 ************************************ 00:08:06.813 END TEST app_cmdline 00:08:06.813 ************************************ 00:08:06.813 07:14:14 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:06.813 07:14:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.813 07:14:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.813 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:06.813 ************************************ 00:08:06.813 START TEST version 00:08:06.813 ************************************ 00:08:06.813 07:14:14 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:06.813 * Looking for test storage... 
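The cmdline test that just finished starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and anything else is rejected with -32601 "Method not found", which is exactly what the env_dpdk_get_mem_stats call above provoked. The same behaviour can be reproduced by hand against a target started that way (rpc.py from the SPDK tree, socket path as in this run):

    # Target started with: spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
    ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version        # allowed: returns the version object
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods         # allowed: lists exactly these two methods
    ./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats  # rejected with -32601 Method not found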
00:08:06.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:06.813 07:14:14 version -- app/version.sh@17 -- # get_header_version major 00:08:06.813 07:14:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:06.813 07:14:14 version -- app/version.sh@14 -- # cut -f2 00:08:06.813 07:14:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.813 07:14:14 version -- app/version.sh@17 -- # major=24 00:08:06.813 07:14:14 version -- app/version.sh@18 -- # get_header_version minor 00:08:06.813 07:14:14 version -- app/version.sh@14 -- # cut -f2 00:08:06.813 07:14:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:06.813 07:14:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.075 07:14:14 version -- app/version.sh@18 -- # minor=9 00:08:07.075 07:14:14 version -- app/version.sh@19 -- # get_header_version patch 00:08:07.075 07:14:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:07.075 07:14:14 version -- app/version.sh@14 -- # cut -f2 00:08:07.075 07:14:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.075 07:14:14 version -- app/version.sh@19 -- # patch=0 00:08:07.075 07:14:14 version -- app/version.sh@20 -- # get_header_version suffix 00:08:07.075 07:14:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:07.075 07:14:14 version -- app/version.sh@14 -- # cut -f2 00:08:07.075 07:14:14 version -- app/version.sh@14 -- # tr -d '"' 00:08:07.075 07:14:14 version -- app/version.sh@20 -- # suffix=-pre 00:08:07.075 07:14:14 version -- app/version.sh@22 -- # version=24.9 00:08:07.075 07:14:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:07.075 07:14:14 version -- app/version.sh@28 -- # version=24.9rc0 00:08:07.075 07:14:14 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:07.075 07:14:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:07.075 07:14:14 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:07.075 07:14:14 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:07.075 00:08:07.075 real 0m0.179s 00:08:07.075 user 0m0.094s 00:08:07.075 sys 0m0.127s 00:08:07.075 07:14:14 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.075 07:14:14 version -- common/autotest_common.sh@10 -- # set +x 00:08:07.075 ************************************ 00:08:07.075 END TEST version 00:08:07.075 ************************************ 00:08:07.075 07:14:14 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:07.075 07:14:14 -- spdk/autotest.sh@202 -- # uname -s 00:08:07.075 07:14:14 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:08:07.075 07:14:14 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:07.075 07:14:14 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:07.075 07:14:14 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
00:08:07.075 07:14:14 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:07.075 07:14:14 -- spdk/autotest.sh@264 -- # timing_exit lib 00:08:07.075 07:14:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.075 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:07.075 07:14:14 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:08:07.075 07:14:14 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:08:07.075 07:14:14 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:08:07.075 07:14:14 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:08:07.075 07:14:14 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:08:07.075 07:14:14 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:08:07.075 07:14:14 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:07.075 07:14:14 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:07.075 07:14:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.075 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:08:07.075 ************************************ 00:08:07.075 START TEST nvmf_tcp 00:08:07.075 ************************************ 00:08:07.075 07:14:14 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:07.336 * Looking for test storage... 00:08:07.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:07.337 07:14:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:07.337 07:14:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:07.337 07:14:14 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:07.337 07:14:14 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:07.337 07:14:14 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.337 07:14:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:07.337 ************************************ 00:08:07.337 START TEST nvmf_target_core 00:08:07.337 ************************************ 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:07.337 * Looking for test storage... 00:08:07.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.337 ************************************ 00:08:07.337 START TEST nvmf_abort 00:08:07.337 ************************************ 00:08:07.337 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:07.599 * Looking for test storage... 
00:08:07.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.599 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:07.600 07:14:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:14.220 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:14.220 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.220 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.221 07:14:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:14.221 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:14.221 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:14.221 
07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.221 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.482 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.482 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.482 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:14.482 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.482 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.482 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.482 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:14.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:08:14.744 00:08:14.744 --- 10.0.0.2 ping statistics --- 00:08:14.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.744 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:08:14.744 00:08:14.744 --- 10.0.0.1 ping statistics --- 00:08:14.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.744 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=4097144 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 4097144 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 4097144 ']' 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.744 07:14:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:14.744 [2024-07-25 07:14:21.967954] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:08:14.744 [2024-07-25 07:14:21.968007] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.744 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.744 [2024-07-25 07:14:22.035810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:15.005 [2024-07-25 07:14:22.130855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.005 [2024-07-25 07:14:22.130917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.005 [2024-07-25 07:14:22.130925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.005 [2024-07-25 07:14:22.130932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.005 [2024-07-25 07:14:22.130937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
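The nvmf_tcp_init trace above reduces to a small two-port topology: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2/24, while its sibling port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1/24, with TCP port 4420 opened for NVMe/TCP. A minimal standalone sketch of the same wiring, assuming this host's interface and namespace names, is:

#!/usr/bin/env bash
# Sketch of the namespace setup performed by nvmf_tcp_init in the trace above.
# Interface/namespace names are the ones used on this CI host; substitute your own.
set -e
TGT_IF=cvl_0_0        # port that becomes the NVMe/TCP target side
INI_IF=cvl_0_1        # port that stays in the default namespace (initiator side)
NS=cvl_0_0_ns_spdk    # private namespace for the target

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # move target port into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                         # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator sanity check
# nvmf_tgt is then launched inside $NS (ip netns exec "$NS" .../nvmf_tgt ...), as traced above.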
00:08:15.005 [2024-07-25 07:14:22.131021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.005 [2024-07-25 07:14:22.131188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.005 [2024-07-25 07:14:22.131188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.577 [2024-07-25 07:14:22.792700] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.577 Malloc0 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.577 Delay0 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.577 [2024-07-25 07:14:22.878573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.577 07:14:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:15.577 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.839 [2024-07-25 07:14:23.041414] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:18.384 Initializing NVMe Controllers 00:08:18.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:18.384 controller IO queue size 128 less than required 00:08:18.384 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:18.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:18.384 Initialization complete. Launching workers. 
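Taken together, the rpc_cmd calls traced above build the target that build/examples/abort then exercises: a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev (presumably so I/O stays in flight long enough to be abortable), and subsystem nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420. Replayed as a plain rpc.py session against the already-running nvmf_tgt, the sequence is roughly:

# Rough replay of the RPC sequence above; rpc_cmd is scripts/rpc.py talking to
# the default /var/tmp/spdk.sock of the nvmf_tgt started earlier.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256        # TCP transport, same flags as the trace
$RPC bdev_malloc_create 64 4096 -b Malloc0                 # 64 MiB RAM bdev, 4 KiB blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000           # ~1 s artificial latency on all I/O
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# The abort example then drives the subsystem at queue depth 128 and aborts queued I/O:
$SPDK/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128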
00:08:18.384 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32232 00:08:18.384 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32293, failed to submit 62 00:08:18.384 success 32236, unsuccess 57, failed 0 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.384 rmmod nvme_tcp 00:08:18.384 rmmod nvme_fabrics 00:08:18.384 rmmod nvme_keyring 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 4097144 ']' 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 4097144 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 4097144 ']' 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 4097144 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4097144 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4097144' 00:08:18.384 killing process with pid 4097144 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 4097144 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 4097144 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.384 07:14:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.298 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.298 00:08:20.298 real 0m12.815s 00:08:20.298 user 0m13.372s 00:08:20.298 sys 0m6.304s 00:08:20.298 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.298 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:20.298 ************************************ 00:08:20.298 END TEST nvmf_abort 00:08:20.298 ************************************ 00:08:20.298 07:14:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:20.298 07:14:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:20.298 07:14:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.298 07:14:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.298 ************************************ 00:08:20.298 START TEST nvmf_ns_hotplug_stress 00:08:20.298 ************************************ 00:08:20.298 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:20.586 * Looking for test storage... 
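At the end of the abort test, nvmftestfini unwinds the setup: the nvmf_tgt process (pid 4097144) is killed, the host-side NVMe modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), the SPDK namespace is removed and the initiator address is flushed. _remove_spdk_ns itself runs with xtrace disabled, so its commands are not visible here; a rough equivalent of the whole teardown, assuming it simply deletes the namespace, is:

# Approximate teardown mirroring nvmftestfini above (PID and names from this run;
# the namespace deletion is an assumption, since _remove_spdk_ns is traced silently).
NVMF_PID=4097144
kill -0 "$NVMF_PID" 2>/dev/null && kill "$NVMF_PID"   # stop the nvmf_tgt reactors
modprobe -v -r nvme-tcp                               # also pulls out nvme_fabrics/nvme_keyring here
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk                       # assumed effect of remove_spdk_ns
ip -4 addr flush cvl_0_1                              # clear the initiator-side address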
00:08:20.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.586 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.587 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.587 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.587 07:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
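gather_supported_nvmf_pci_devs, whose array setup is being traced here, classifies the host's NICs purely by PCI vendor:device ID (e810 = Intel 0x1592/0x159b, x722 = Intel 0x37d2, mlx = the Mellanox device IDs the mlx+= lines enumerate) and then keeps only the family configured for this run (e810 here, per the [[ e810 == e810 ]] checks). A simplified, hypothetical stand-in that buckets devices the same way from sysfs (the real script uses a pre-built pci_bus_cache and an explicit Mellanox ID list) could look like:

# Hypothetical helper, not part of nvmf/common.sh: bucket PCI NICs by vendor:device ID.
intel=0x8086; mellanox=0x15b3
declare -a e810 x722 mlx
for dev in /sys/bus/pci/devices/*; do
    ven=$(<"$dev/vendor"); did=$(<"$dev/device")
    case "$ven:$did" in
        "$intel:0x1592"|"$intel:0x159b") e810+=("${dev##*/}") ;;  # Intel E810 (ice)
        "$intel:0x37d2")                 x722+=("${dev##*/}") ;;  # Intel X722
        "$mellanox:"*)                   mlx+=("${dev##*/}")  ;;  # Mellanox ConnectX (simplified)
    esac
done
echo "e810: ${e810[*]}  x722: ${x722[*]}  mlx: ${mlx[*]}"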
00:08:28.736 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:28.737 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.737 07:14:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:28.737 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:28.737 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:28.737 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:28.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:28.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:08:28.737 00:08:28.737 --- 10.0.0.2 ping statistics --- 00:08:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.737 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:08:28.737 00:08:28.737 --- 10.0.0.1 ping statistics --- 00:08:28.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.737 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.737 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=4101916 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 4101916 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 4101916 ']' 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
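nvmfappstart launches the target as ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE: -e 0xFFFF enables every tracepoint group (the "Tracepoint Group Mask 0xFFFF" notice below), -i 0 selects shared-memory id 0 (visible as --file-prefix=spdk0 in the EAL parameters), and -m 0xE is the reactor core mask; 0xE is binary 1110, so reactors come up on cores 1, 2 and 3, as the notices below report. A tiny snippet to decode such a mask:

# Decode an SPDK/DPDK hex core mask into the CPU cores it selects.
mask=0xE
for ((core = 0; core < 64; core++)); do
    (( (mask >> core) & 1 )) && echo "core $core"
done
# 0xE -> core 1, core 2, core 3 (matching the three reactors reported below)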
00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.738 07:14:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.738 [2024-07-25 07:14:35.000111] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:08:28.738 [2024-07-25 07:14:35.000162] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.738 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.738 [2024-07-25 07:14:35.083195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.738 [2024-07-25 07:14:35.170512] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.738 [2024-07-25 07:14:35.170564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.738 [2024-07-25 07:14:35.170572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.738 [2024-07-25 07:14:35.170579] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.738 [2024-07-25 07:14:35.170585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.738 [2024-07-25 07:14:35.170671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.738 [2024-07-25 07:14:35.170841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.738 [2024-07-25 07:14:35.170843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.738 07:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.738 07:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:28.738 07:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.738 07:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:28.738 07:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.738 07:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.738 07:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:28.738 07:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:28.738 [2024-07-25 07:14:35.956507] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.738 07:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:28.999 07:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.999 
[2024-07-25 07:14:36.301205] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.999 07:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.260 07:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:29.557 Malloc0 00:08:29.557 07:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:29.557 Delay0 00:08:29.557 07:14:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.819 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:29.819 NULL1 00:08:29.819 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:30.081 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4102517 00:08:30.081 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:30.081 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:30.081 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.081 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.343 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.343 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:30.343 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:30.604 true 00:08:30.604 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:30.604 07:14:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.866 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:08:30.866 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:30.866 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:31.128 true 00:08:31.128 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:31.128 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.389 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.389 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:31.389 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:31.651 true 00:08:31.651 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:31.651 07:14:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.912 07:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.912 07:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:31.912 07:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:32.173 true 00:08:32.173 07:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:32.173 07:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.434 07:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.434 07:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:32.434 07:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:32.695 true 00:08:32.696 07:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:32.696 07:14:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.696 07:14:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.957 07:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:32.957 07:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:33.219 true 00:08:33.219 07:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:33.219 07:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.219 07:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.481 07:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:33.481 07:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:33.742 true 00:08:33.742 07:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:33.742 07:14:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.742 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.003 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:34.003 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:34.263 true 00:08:34.263 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:34.263 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.263 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.523 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:34.523 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:34.524 true 00:08:34.784 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:34.784 07:14:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.785 07:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.045 07:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:35.045 07:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:35.045 true 00:08:35.307 07:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:35.307 07:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.307 07:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.568 07:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:35.568 07:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:35.568 true 00:08:35.568 07:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:35.568 07:14:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.829 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.105 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:36.105 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:36.105 true 00:08:36.105 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:36.105 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.373 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.635 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:36.635 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:36.635 true 00:08:36.635 07:14:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:36.635 07:14:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.896 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.896 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:36.896 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:37.157 true 00:08:37.157 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:37.157 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.418 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.418 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:37.418 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:37.678 true 00:08:37.678 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:37.678 07:14:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.940 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.940 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:37.940 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:38.200 true 00:08:38.200 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:38.200 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.460 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.460 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:38.460 07:14:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:38.721 true 00:08:38.721 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:38.721 07:14:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.982 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.982 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:38.982 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:39.243 true 00:08:39.243 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:39.243 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.503 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.503 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:39.503 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:39.764 true 00:08:39.764 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:39.764 07:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.764 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.025 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:40.025 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:40.287 true 00:08:40.287 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:40.287 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.287 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.548 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:40.548 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:40.809 true 00:08:40.809 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:40.809 07:14:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.809 07:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.070 07:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:41.070 07:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:41.332 true 00:08:41.332 07:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:41.332 07:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.332 07:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.594 07:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:41.594 07:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:41.594 true 00:08:41.855 07:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:41.855 07:14:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.855 07:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.116 07:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:42.116 07:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:42.116 true 00:08:42.116 07:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:42.116 07:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.377 07:14:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.639 07:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:42.639 07:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:42.639 true 00:08:42.639 07:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:42.639 07:14:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.900 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.162 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:43.162 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:43.162 true 00:08:43.162 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:43.162 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.494 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.494 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:43.494 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:43.755 true 00:08:43.755 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:43.756 07:14:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.017 07:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.017 07:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:44.017 07:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:44.278 true 00:08:44.278 07:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:44.278 07:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.538 07:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.538 07:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:44.538 07:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:44.799 true 00:08:44.799 07:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:44.799 07:14:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.799 07:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.061 07:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:45.061 07:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:45.323 true 00:08:45.323 07:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:45.323 07:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.323 07:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.584 07:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:45.584 07:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:45.845 true 00:08:45.845 07:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:45.845 07:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.845 07:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.105 07:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:46.105 07:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:46.106 true 00:08:46.367 07:14:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:46.367 07:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.368 07:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.629 07:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:46.629 07:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:46.629 true 00:08:46.629 07:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:46.629 07:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.889 07:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.150 07:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:47.150 07:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:47.150 true 00:08:47.150 07:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:47.150 07:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.410 07:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.670 07:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:47.670 07:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:47.671 true 00:08:47.671 07:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:47.671 07:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.930 07:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.191 07:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:48.191 07:14:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:48.191 true 00:08:48.191 07:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:48.191 07:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.452 07:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.712 07:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:48.712 07:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:48.712 true 00:08:48.712 07:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:48.712 07:14:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.973 07:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.973 07:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:48.973 07:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:49.233 true 00:08:49.233 07:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:49.233 07:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.494 07:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.494 07:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:49.494 07:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:49.755 true 00:08:49.755 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:49.755 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.016 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.016 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:50.016 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:50.278 true 00:08:50.278 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:50.278 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.539 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.539 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:50.539 07:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:50.801 true 00:08:50.801 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:50.801 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.061 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.061 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:51.061 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:51.323 true 00:08:51.323 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:51.323 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.323 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.584 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:51.584 07:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:51.846 true 00:08:51.846 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:51.846 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.846 07:14:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.107 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:52.107 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:52.368 true 00:08:52.368 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:52.368 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.368 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.629 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:52.629 07:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:52.890 true 00:08:52.891 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:52.891 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.891 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.152 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:08:53.152 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:08:53.152 true 00:08:53.413 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:53.413 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.413 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.675 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:53.675 07:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:53.675 true 00:08:53.935 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:53.935 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.935 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.197 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:54.197 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:54.197 true 00:08:54.197 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:54.197 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.458 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.718 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:08:54.718 07:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:08:54.718 true 00:08:54.718 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:54.718 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.978 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.236 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:08:55.236 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:08:55.236 true 00:08:55.236 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:55.236 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.496 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.756 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:08:55.756 07:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:08:55.756 true 00:08:55.756 07:15:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:55.756 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.017 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.279 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:08:56.279 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:08:56.279 true 00:08:56.279 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:56.279 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.540 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.540 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:08:56.540 07:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:08:56.800 true 00:08:56.800 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:56.800 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.060 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.060 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:08:57.060 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:08:57.393 true 00:08:57.393 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:57.393 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.393 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.654 07:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:08:57.654 07:15:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:08:57.915 true 00:08:57.915 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:57.915 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.915 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.176 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:08:58.176 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:08:58.437 true 00:08:58.437 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:58.437 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.437 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.698 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:08:58.698 07:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:08:58.698 true 00:08:58.959 07:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:58.959 07:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.959 07:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.220 07:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:08:59.220 07:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:08:59.220 true 00:08:59.481 07:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:59.481 07:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.481 07:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.742 07:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:08:59.742 07:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:08:59.742 true 00:08:59.742 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517 00:08:59.742 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.003 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.264 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:09:00.264 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:09:00.264 true 00:09:00.264 Initializing NVMe Controllers 00:09:00.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:00.264 Controller IO queue size 128, less than required. 00:09:00.264 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:00.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:00.264 Initialization complete. Launching workers. 
00:09:00.264 ========================================================
00:09:00.264                                                                          Latency(us)
00:09:00.264 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:09:00.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   31231.67      15.25    4098.31    2361.62   10592.30
00:09:00.264 ========================================================
00:09:00.264 Total                                                                    :   31231.67      15.25    4098.31    2361.62   10592.30
00:09:00.264 
00:09:00.264 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4102517
00:09:00.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4102517) - No such process
00:09:00.264 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4102517
00:09:00.264 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:00.525 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:00.786 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:00.786 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:00.786 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:00.786 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:00.786 07:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:00.786 null0
00:09:00.786 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:00.786 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:00.786 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:01.047 null1
00:09:01.047 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:01.047 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:01.047 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:01.047 null2
00:09:01.047 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:01.047 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:01.047 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:09:01.308 null3
00:09:01.308 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:01.308 
07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:01.308 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:01.569 null4 00:09:01.569 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:01.569 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:01.569 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:01.569 null5 00:09:01.569 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:01.569 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:01.569 07:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:01.830 null6 00:09:01.830 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:01.830 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:01.830 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:01.830 null7 00:09:01.830 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:01.830 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:01.830 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
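NOTE: The long run of add/resize/remove entries above comes from the first phase of ns_hotplug_stress.sh (script lines 44-50): while the background I/O generator (PID 4102517) is still alive, the script keeps hot-removing namespace 1 of nqn.2016-06.io.spdk:cnode1, re-adding it backed by the Delay0 bdev, and growing the NULL1 bdev by one unit per pass (null_size 1006 through 1060 in this excerpt). The perf summary printed once that process exits shows about 31.2k IOPS at roughly 4.1 ms average latency against NSID 2 despite the constant hot-plugging. A rough shell sketch of that loop, reconstructed from the trace only (rpc_py, perf_pid, and the starting null_size are assumptions, not the script's actual source):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                                   # starting size assumed; the trace only shows later values
    while kill -0 "$perf_pid"; do                    # line 44: run until the I/O generator exits ("No such process" above)
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45: hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46: hot-add it back, backed by Delay0
        null_size=$((null_size + 1))                                     # line 49: bump the target size
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # line 50: resize NULL1 while I/O is in flight
    done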
00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
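NOTE: From line 58 onward the script switches to its parallel phase: it creates eight null bdevs (bdev_null_create nullN 100 4096 gives a 100 MB no-op bdev with a 4096-byte block size) and then launches eight background add_remove workers, one namespace ID per bdev, collecting their PIDs for a later wait. A sketch of what the @58-@64 entries imply (loop shape inferred from the trace, not copied from the script):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096   # line 60: 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &             # line 63: nsid 1..8 paired with null0..null7
        pids+=($!)                                   # line 64: remember each worker's PID
    done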
00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:02.092 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
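NOTE: The interleaved @14-@18 entries are the add_remove workers themselves: each worker binds one namespace ID to its bdev and hot-adds and hot-removes that namespace ten times (the (( i < 10 )) counter at line 16), so several namespaces of cnode1 are appearing and disappearing concurrently. A sketch of the helper as the trace suggests it (the real function body may differ in detail):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                        # line 14: this worker's namespace ID and backing bdev
        for ((i = 0; i < 10; i++)); do               # line 16: ten add/remove cycles per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17: attach bdev as NSID
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18: detach it again
        done
    }

Once all eight workers finish, the @66 wait on the eight worker PIDs (visible below) reaps them before the test moves on.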
00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4109095 4109100 4109103 4109107 4109112 4109116 4109119 4109122 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:02.093 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:02.354 07:15:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:02.354 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.617 07:15:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:02.617 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.878 07:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:02.878 07:15:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:02.878 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:03.139 07:15:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:03.139 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:03.140 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.400 07:15:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.400 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:03.400 07:15:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:03.661 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:03.662 07:15:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.662 07:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:03.662 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.662 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.662 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:03.662 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.662 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.662 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:03.662 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:03.923 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:03.923 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.923 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.923 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:03.924 07:15:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:03.924 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:04.185 07:15:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.185 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:04.445 07:15:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.445 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:04.705 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:04.705 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.705 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.705 07:15:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:04.705 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.705 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.705 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:04.705 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.705 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.706 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:04.706 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:04.706 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:04.706 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.706 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.706 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:04.706 07:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.706 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.706 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.706 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:04.706 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:04.706 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.706 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.706 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.706 07:15:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:04.706 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:04.706 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.706 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.706 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:04.966 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
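From this point the per-worker loops run out of iterations and the (( i < 10 )) checks start failing; the script then clears its SIGINT/SIGTERM/EXIT trap at @68 and calls nvmftestfini at @70. A condensed reconstruction of that teardown, pieced together from the nvmf/common.sh line numbers traced below (the real helpers add retries and error handling, and the nvmfpid variable name is an assumption):

nvmftestfini() {
    nvmfcleanup               # @117-@125: sync, then modprobe -v -r nvme-tcp and nvme-fabrics;
                              # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines below are its output
    killprocess "$nvmfpid"    # @489-@490: stop the nvmf_tgt reactor, pid 4101916 in this run
    nvmf_tcp_fini             # @495-@496 into @274-@279: remove_spdk_ns, then ip -4 addr flush cvl_0_1
}

Once the address flush completes, the harness prints the per-test timing (0m47.689s real here), closes the END TEST banner for nvmf_ns_hotplug_stress, and immediately launches run_test nvmf_delete_subsystem, whose own trace of sourcing nvmf/common.sh follows.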
00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.228 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.489 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:05.751 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:05.751 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:05.751 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:05.751 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:05.751 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.751 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@117 -- # sync 00:09:05.751 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:05.751 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:05.751 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:05.751 07:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:05.751 rmmod nvme_tcp 00:09:05.751 rmmod nvme_fabrics 00:09:05.751 rmmod nvme_keyring 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 4101916 ']' 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 4101916 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 4101916 ']' 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 4101916 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4101916 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4101916' 00:09:05.751 killing process with pid 4101916 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 4101916 00:09:05.751 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 4101916 00:09:06.013 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.013 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.013 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.013 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.013 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.013 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.013 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.013 07:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.928 07:15:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:07.928 00:09:07.928 real 0m47.689s 00:09:07.928 user 3m13.501s 00:09:07.928 sys 0m16.477s 00:09:07.928 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.928 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.928 ************************************ 00:09:07.928 END TEST nvmf_ns_hotplug_stress 00:09:07.928 ************************************ 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.190 ************************************ 00:09:08.190 START TEST nvmf_delete_subsystem 00:09:08.190 ************************************ 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:08.190 * Looking for test storage... 00:09:08.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.190 07:15:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.338 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.338 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:16.338 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:16.338 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:16.338 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:16.339 07:15:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:16.339 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:16.339 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:16.339 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:16.339 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.339 07:15:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:16.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:16.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:09:16.339 00:09:16.339 --- 10.0.0.2 ping statistics --- 00:09:16.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.339 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.408 ms 00:09:16.339 00:09:16.339 --- 10.0.0.1 ping statistics --- 00:09:16.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.339 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:16.339 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4114598 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4114598 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 4114598 ']' 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.340 07:15:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 [2024-07-25 07:15:22.810546] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:09:16.340 [2024-07-25 07:15:22.810611] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.340 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.340 [2024-07-25 07:15:22.881894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:16.340 [2024-07-25 07:15:22.956070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.340 [2024-07-25 07:15:22.956110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.340 [2024-07-25 07:15:22.956118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.340 [2024-07-25 07:15:22.956125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.340 [2024-07-25 07:15:22.956130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
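For readers following the trace, the target in this test is launched inside the cvl_0_0_ns_spdk namespace set up earlier, and nvmfappstart then blocks until the RPC socket answers. A hedged sketch of that pattern, using only commands visible in the trace; the polling loop is an assumption standing in for the harness's waitforlisten helper, and paths are shortened to the spdk checkout root:

    # start the NVMe-oF target pinned to cores 0-1 (-m 0x3) inside the test namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # stand-in for waitforlisten: poll the default RPC socket until the app responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done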
00:09:16.340 [2024-07-25 07:15:22.956248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.340 [2024-07-25 07:15:22.956277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 [2024-07-25 07:15:23.640047] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 [2024-07-25 07:15:23.656199] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 NULL1 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 
-w 1000000 -n 1000000 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 Delay0 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4114930 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:16.340 07:15:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:16.601 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.601 [2024-07-25 07:15:23.740881] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:09:18.585 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.585 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.585 07:15:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:18.585 Write completed with error (sct=0, sc=8) 00:09:18.585 Write completed with error (sct=0, sc=8) 00:09:18.585 starting I/O failed: -6 00:09:18.585 Read completed with error (sct=0, sc=8) 00:09:18.585 Read completed with error (sct=0, sc=8) 00:09:18.585 Write completed with error (sct=0, sc=8) 00:09:18.585 Read completed with error (sct=0, sc=8) 00:09:18.585 starting I/O failed: -6 00:09:18.585 Write completed with error (sct=0, sc=8) 00:09:18.585 Read completed with error (sct=0, sc=8) 00:09:18.585 Read completed with error (sct=0, sc=8) 00:09:18.585 Read completed with error (sct=0, sc=8) 00:09:18.585 starting I/O failed: -6 00:09:18.585 Read completed with error (sct=0, sc=8) 00:09:18.585 Read completed with error (sct=0, sc=8) 00:09:18.585 Read completed with error (sct=0, sc=8) 00:09:18.585 Write completed with error (sct=0, sc=8) 00:09:18.585 starting I/O failed: -6 00:09:18.585 Write completed with error (sct=0, sc=8) 00:09:18.585 Write completed with error (sct=0, sc=8) 00:09:18.585 Read completed with error (sct=0, sc=8) 00:09:18.585 Read completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 [2024-07-25 07:15:25.835698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87d710 is same with the state(5) to be set 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 
Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 [2024-07-25 07:15:25.837650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87d000 is same with the state(5) to be set 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 starting I/O 
failed: -6 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 starting I/O failed: -6 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 [2024-07-25 07:15:25.840316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f84a0000c00 is same with the state(5) to be set 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 
00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Write completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.586 Read completed with error (sct=0, sc=8) 00:09:18.587 Read completed with error (sct=0, sc=8) 00:09:18.587 Read completed with error (sct=0, sc=8) 00:09:18.587 Read completed with error (sct=0, sc=8) 00:09:18.587 Write completed with error (sct=0, sc=8) 00:09:18.587 Read completed with error (sct=0, sc=8) 00:09:18.587 Read completed with error (sct=0, sc=8) 00:09:18.587 Read completed with error (sct=0, sc=8) 00:09:18.587 Read completed with error (sct=0, sc=8) 00:09:18.587 Write completed with error (sct=0, sc=8) 00:09:19.531 [2024-07-25 07:15:26.800811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87eac0 is same with the state(5) to be set 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 [2024-07-25 07:15:26.839609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87d3e0 is same with the state(5) to be set 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 
[2024-07-25 07:15:26.839720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87da40 is same with the state(5) to be set 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 [2024-07-25 07:15:26.842733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f84a000d7a0 is same with the state(5) to be set 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Read completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 Write completed with error (sct=0, sc=8) 00:09:19.531 [2024-07-25 07:15:26.842815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f84a000d000 is same with the state(5) to be set 00:09:19.531 Initializing NVMe Controllers 00:09:19.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:19.531 Controller IO queue size 128, less than required. 00:09:19.531 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:19.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:19.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:19.531 Initialization complete. Launching workers. 
00:09:19.531 ======================================================== 00:09:19.531 Latency(us) 00:09:19.531 Device Information : IOPS MiB/s Average min max 00:09:19.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.37 0.08 909006.92 768.72 1007618.67 00:09:19.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.38 0.08 917706.81 211.81 1011284.86 00:09:19.531 ======================================================== 00:09:19.531 Total : 323.76 0.16 913316.71 211.81 1011284.86 00:09:19.531 00:09:19.531 [2024-07-25 07:15:26.843448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87eac0 (9): Bad file descriptor 00:09:19.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:19.531 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.531 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:19.531 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4114930 00:09:19.531 07:15:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:20.103 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:20.103 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4114930 00:09:20.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4114930) - No such process 00:09:20.103 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4114930 00:09:20.103 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:20.103 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 4114930 00:09:20.103 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 4114930 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.104 07:15:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.104 [2024-07-25 07:15:27.375912] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4115615 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4115615 00:09:20.104 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:20.104 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.104 [2024-07-25 07:15:27.443050] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:09:20.675 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:20.675 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4115615 00:09:20.675 07:15:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:21.246 07:15:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:21.246 07:15:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4115615 00:09:21.246 07:15:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:21.817 07:15:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:21.817 07:15:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4115615 00:09:21.817 07:15:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:22.078 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:22.078 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4115615 00:09:22.078 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:22.650 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:22.650 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4115615 00:09:22.650 07:15:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:23.221 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:23.221 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4115615 00:09:23.221 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:23.482 Initializing NVMe Controllers 00:09:23.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:23.482 Controller IO queue size 128, less than required. 00:09:23.482 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:23.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:23.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:23.482 Initialization complete. Launching workers. 
00:09:23.482 ========================================================
00:09:23.482 Latency(us)
00:09:23.482 Device Information : IOPS MiB/s Average min max
00:09:23.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002565.93 1000247.84 1041223.34
00:09:23.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003225.52 1000367.99 1009666.35
00:09:23.482 ========================================================
00:09:23.482 Total : 256.00 0.12 1002895.73 1000247.84 1041223.34
00:09:23.482
00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4115615 00:09:23.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4115615) - No such process 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4115615 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:23.743 rmmod nvme_tcp 00:09:23.743 rmmod nvme_fabrics 00:09:23.743 rmmod nvme_keyring 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4114598 ']' 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4114598 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 4114598 ']' 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 4114598 00:09:23.743 07:15:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:09:23.743 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.743 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4114598 00:09:23.743 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.743 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:09:23.743 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4114598' 00:09:23.743 killing process with pid 4114598 00:09:23.743 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 4114598 00:09:23.743 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 4114598 00:09:24.005 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.005 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:24.005 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.005 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.005 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.005 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.005 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.005 07:15:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.919 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:25.919 00:09:25.919 real 0m17.917s 00:09:25.919 user 0m30.528s 00:09:25.919 sys 0m6.354s 00:09:25.919 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.919 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.919 ************************************ 00:09:25.919 END TEST nvmf_delete_subsystem 00:09:25.919 ************************************ 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.278 ************************************ 00:09:26.278 START TEST nvmf_host_management 00:09:26.278 ************************************ 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:26.278 * Looking for test storage... 
00:09:26.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.278 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:09:26.279 07:15:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:09:32.867 
07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:32.867 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:32.867 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.867 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:32.867 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:32.868 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.868 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.129 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.129 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.129 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:33.129 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.129 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.129 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.389 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:33.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:33.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:09:33.389 00:09:33.389 --- 10.0.0.2 ping statistics --- 00:09:33.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.389 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:09:33.390 00:09:33.390 --- 10.0.0.1 ping statistics --- 00:09:33.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.390 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4120632 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4120632 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 4120632 ']' 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.390 07:15:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:33.390 [2024-07-25 07:15:40.617216] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:09:33.390 [2024-07-25 07:15:40.617265] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.390 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.390 [2024-07-25 07:15:40.700991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.651 [2024-07-25 07:15:40.766867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.651 [2024-07-25 07:15:40.766906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.651 [2024-07-25 07:15:40.766914] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.651 [2024-07-25 07:15:40.766920] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.651 [2024-07-25 07:15:40.766926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.651 [2024-07-25 07:15:40.768217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.651 [2024-07-25 07:15:40.768351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.651 [2024-07-25 07:15:40.768517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.651 [2024-07-25 07:15:40.768519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:34.223 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.223 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.224 [2024-07-25 07:15:41.436142] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.224 Malloc0 00:09:34.224 [2024-07-25 07:15:41.499533] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4120768 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4120768 /var/tmp/bdevperf.sock 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 4120768 ']' 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:34.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
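Before bdevperf is launched, the rpc_cmd calls traced above create the TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, and a subsystem listening on 10.0.0.2:4420. A hedged sketch of an equivalent target-side setup driven by hand through scripts/rpc.py follows; the test itself feeds a generated rpcs.txt through rpc_cmd, so anything not visible in the trace (serial number flag, allow-any-host flag) is an assumption here:

    # Target-side setup against a running nvmf_tgt (default RPC socket /var/tmp/spdk.sock).
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # transport options copied verbatim from the trace
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512              # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420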
00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:34.224 { 00:09:34.224 "params": { 00:09:34.224 "name": "Nvme$subsystem", 00:09:34.224 "trtype": "$TEST_TRANSPORT", 00:09:34.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.224 "adrfam": "ipv4", 00:09:34.224 "trsvcid": "$NVMF_PORT", 00:09:34.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.224 "hdgst": ${hdgst:-false}, 00:09:34.224 "ddgst": ${ddgst:-false} 00:09:34.224 }, 00:09:34.224 "method": "bdev_nvme_attach_controller" 00:09:34.224 } 00:09:34.224 EOF 00:09:34.224 )") 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:34.224 07:15:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:34.224 "params": { 00:09:34.224 "name": "Nvme0", 00:09:34.224 "trtype": "tcp", 00:09:34.224 "traddr": "10.0.0.2", 00:09:34.224 "adrfam": "ipv4", 00:09:34.224 "trsvcid": "4420", 00:09:34.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:34.224 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:34.224 "hdgst": false, 00:09:34.224 "ddgst": false 00:09:34.224 }, 00:09:34.224 "method": "bdev_nvme_attach_controller" 00:09:34.224 }' 00:09:34.485 [2024-07-25 07:15:41.598623] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:09:34.485 [2024-07-25 07:15:41.598674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4120768 ] 00:09:34.485 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.485 [2024-07-25 07:15:41.657271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.485 [2024-07-25 07:15:41.721841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.746 Running I/O for 10 seconds... 
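The JSON fragment printed above is a single bdev_nvme_attach_controller call aimed at 10.0.0.2:4420; gen_nvmf_target_json wraps it and hands it to bdevperf over /dev/fd/63, and the entries that follow wait for the resulting Nvme0n1 bdev to report reads. A rough host-side sketch, assuming the generated config has been written to a hypothetical nvme0.json rather than passed via process substitution:

    # Run bdevperf with the generated bdev config and its own RPC socket (flags from the trace).
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json nvme0.json \
        -q 64 -o 65536 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock framework_wait_init     # block until bdevperf has initialized

    # Poll the read counter; the test treats >=100 completed reads as "I/O is flowing".
    for i in {1..10}; do
        reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        (( reads >= 100 )) && break
        sleep 1
    done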
00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=385 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 385 -ge 100 ']' 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.320 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 [2024-07-25 
07:15:42.463047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.320 [2024-07-25 07:15:42.463086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.320 [2024-07-25 07:15:42.463103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.320 [2024-07-25 07:15:42.463110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.320 [2024-07-25 07:15:42.463120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.320 [2024-07-25 07:15:42.463128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.320 [2024-07-25 07:15:42.463137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.320 [2024-07-25 07:15:42.463144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.320 [2024-07-25 07:15:42.463154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.320 [2024-07-25 07:15:42.463160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.320 [2024-07-25 07:15:42.463170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.320 [2024-07-25 07:15:42.463181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.320 [2024-07-25 07:15:42.463191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.320 [2024-07-25 07:15:42.463198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.320 [2024-07-25 07:15:42.463212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.320 [2024-07-25 07:15:42.463219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.320 [2024-07-25 07:15:42.463228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.320 [2024-07-25 07:15:42.463235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.320 [2024-07-25 07:15:42.463245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.320 [2024-07-25 07:15:42.463251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.320 [2024-07-25 07:15:42.463261] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.321 [2024-07-25 07:15:42.463793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.321 [2024-07-25 07:15:42.463803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.463990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.463999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.464008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.464024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.464040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.464056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.464072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.464088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.464104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.464120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.322 [2024-07-25 07:15:42.464137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d070 is same with the state(5) to be set 00:09:35.322 [2024-07-25 07:15:42.464185] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x235d070 was disconnected and freed. reset controller. 00:09:35.322 [2024-07-25 07:15:42.464226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:35.322 [2024-07-25 07:15:42.464236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:35.322 [2024-07-25 07:15:42.464252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:35.322 [2024-07-25 07:15:42.464266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:35.322 [2024-07-25 07:15:42.464284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.464290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b3d0 is same with the state(5) to be set 00:09:35.322 [2024-07-25 07:15:42.465496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:35.322 task offset: 56576 on job bdev=Nvme0n1 fails 00:09:35.322 00:09:35.322 Latency(us) 00:09:35.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:09:35.322 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:35.322 Job: Nvme0n1 ended in about 0.44 seconds with error 00:09:35.322 Verification LBA range: start 0x0 length 0x400 00:09:35.322 Nvme0n1 : 0.44 874.57 54.66 145.76 0.00 61100.16 1747.63 51991.89 00:09:35.322 =================================================================================================================== 00:09:35.322 Total : 874.57 54.66 145.76 0.00 61100.16 1747.63 51991.89 00:09:35.322 [2024-07-25 07:15:42.467473] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.322 [2024-07-25 07:15:42.467497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2b3d0 (9): Bad file descriptor 00:09:35.322 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.322 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:35.322 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.322 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:35.322 [2024-07-25 07:15:42.472358] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:09:35.322 [2024-07-25 07:15:42.472503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:09:35.322 [2024-07-25 07:15:42.472534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:35.322 [2024-07-25 07:15:42.472551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:09:35.322 [2024-07-25 07:15:42.472559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:09:35.322 [2024-07-25 07:15:42.472567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:09:35.322 [2024-07-25 07:15:42.472573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f2b3d0 00:09:35.323 [2024-07-25 07:15:42.472595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2b3d0 (9): Bad file descriptor 00:09:35.323 [2024-07-25 07:15:42.472607] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:09:35.323 [2024-07-25 07:15:42.472613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:09:35.323 [2024-07-25 07:15:42.472622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:09:35.323 [2024-07-25 07:15:42.472635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:09:35.323 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.323 07:15:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4120768 00:09:36.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4120768) - No such process 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:36.266 { 00:09:36.266 "params": { 00:09:36.266 "name": "Nvme$subsystem", 00:09:36.266 "trtype": "$TEST_TRANSPORT", 00:09:36.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:36.266 "adrfam": "ipv4", 00:09:36.266 "trsvcid": "$NVMF_PORT", 00:09:36.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:36.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:36.266 "hdgst": ${hdgst:-false}, 00:09:36.266 "ddgst": ${ddgst:-false} 00:09:36.266 }, 00:09:36.266 "method": "bdev_nvme_attach_controller" 00:09:36.266 } 00:09:36.266 EOF 00:09:36.266 )") 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:36.266 07:15:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:36.266 "params": { 00:09:36.266 "name": "Nvme0", 00:09:36.266 "trtype": "tcp", 00:09:36.266 "traddr": "10.0.0.2", 00:09:36.266 "adrfam": "ipv4", 00:09:36.266 "trsvcid": "4420", 00:09:36.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:36.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:36.266 "hdgst": false, 00:09:36.266 "ddgst": false 00:09:36.266 }, 00:09:36.266 "method": "bdev_nvme_attach_controller" 00:09:36.266 }' 00:09:36.266 [2024-07-25 07:15:43.544604] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:09:36.266 [2024-07-25 07:15:43.544660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4121255 ] 00:09:36.266 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.266 [2024-07-25 07:15:43.602322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.527 [2024-07-25 07:15:43.665850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.787 Running I/O for 1 seconds... 00:09:37.731 00:09:37.731 Latency(us) 00:09:37.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.731 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:37.731 Verification LBA range: start 0x0 length 0x400 00:09:37.731 Nvme0n1 : 1.05 1157.17 72.32 0.00 0.00 54395.53 10321.92 59856.21 00:09:37.731 =================================================================================================================== 00:09:37.731 Total : 1157.17 72.32 0.00 0.00 54395.53 10321.92 59856.21 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.992 rmmod nvme_tcp 00:09:37.992 rmmod nvme_fabrics 00:09:37.992 rmmod nvme_keyring 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4120632 ']' 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4120632 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 4120632 ']' 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 4120632 00:09:37.992 07:15:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4120632 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4120632' 00:09:37.992 killing process with pid 4120632 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 4120632 00:09:37.992 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 4120632 00:09:38.253 [2024-07-25 07:15:45.366489] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:38.253 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.253 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.253 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.253 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.253 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.253 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.253 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.253 07:15:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.169 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.169 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:40.169 00:09:40.169 real 0m14.123s 00:09:40.169 user 0m23.156s 00:09:40.169 sys 0m6.122s 00:09:40.169 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.169 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.169 ************************************ 00:09:40.169 END TEST nvmf_host_management 00:09:40.169 ************************************ 00:09:40.169 07:15:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:40.169 07:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:40.169 07:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.169 07:15:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.429 ************************************ 00:09:40.429 START TEST nvmf_lvol 00:09:40.429 ************************************ 00:09:40.429 
07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:40.429 * Looking for test storage... 00:09:40.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.429 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.429 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:40.429 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.429 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.429 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.429 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.429 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.429 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.430 07:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:47.020 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:47.020 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.020 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:47.021 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:47.021 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.021 07:15:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.021 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.282 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:09:47.283 00:09:47.283 --- 10.0.0.2 ping statistics --- 00:09:47.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.283 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:09:47.283 00:09:47.283 --- 10.0.0.1 ping statistics --- 00:09:47.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.283 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4125697 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4125697 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 4125697 ']' 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.283 07:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:47.544 [2024-07-25 07:15:54.666394] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:09:47.544 [2024-07-25 07:15:54.666466] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.544 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.544 [2024-07-25 07:15:54.739132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.544 [2024-07-25 07:15:54.813929] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.544 [2024-07-25 07:15:54.813968] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.544 [2024-07-25 07:15:54.813975] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.544 [2024-07-25 07:15:54.813982] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.544 [2024-07-25 07:15:54.813987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.544 [2024-07-25 07:15:54.814124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.544 [2024-07-25 07:15:54.814239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.544 [2024-07-25 07:15:54.814243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.117 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.117 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:48.117 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.117 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:48.117 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:48.378 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.378 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:48.378 [2024-07-25 07:15:55.634927] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.378 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.639 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:48.639 07:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.900 07:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:48.900 07:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:48.900 07:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:49.161 07:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=03a7cbc1-a84d-4676-be73-46456c1fa4aa 
00:09:49.161 07:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 03a7cbc1-a84d-4676-be73-46456c1fa4aa lvol 20 00:09:49.422 07:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=83024301-c751-4b7a-9799-f8378b8a09de 00:09:49.422 07:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:49.422 07:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 83024301-c751-4b7a-9799-f8378b8a09de 00:09:49.683 07:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:49.944 [2024-07-25 07:15:57.059767] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.944 07:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:49.944 07:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:49.944 07:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4126341 00:09:49.944 07:15:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:49.944 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.887 07:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 83024301-c751-4b7a-9799-f8378b8a09de MY_SNAPSHOT 00:09:51.149 07:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f004dd64-34e1-472b-9714-227b28ed8f01 00:09:51.149 07:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 83024301-c751-4b7a-9799-f8378b8a09de 30 00:09:51.410 07:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f004dd64-34e1-472b-9714-227b28ed8f01 MY_CLONE 00:09:51.671 07:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5c0c048d-880b-4bdf-a070-2ac866aba07e 00:09:51.671 07:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5c0c048d-880b-4bdf-a070-2ac866aba07e 00:09:51.932 07:15:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4126341 00:10:01.941 Initializing NVMe Controllers 00:10:01.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:01.941 Controller IO queue size 128, less than required. 00:10:01.941 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:01.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:01.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:01.941 Initialization complete. Launching workers. 00:10:01.941 ======================================================== 00:10:01.941 Latency(us) 00:10:01.941 Device Information : IOPS MiB/s Average min max 00:10:01.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12404.10 48.45 10322.77 1412.66 58002.93 00:10:01.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17900.30 69.92 7151.62 3592.11 40144.19 00:10:01.941 ======================================================== 00:10:01.941 Total : 30304.40 118.38 8449.63 1412.66 58002.93 00:10:01.941 00:10:01.941 07:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:01.941 07:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 83024301-c751-4b7a-9799-f8378b8a09de 00:10:01.941 07:16:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 03a7cbc1-a84d-4676-be73-46456c1fa4aa 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:01.941 rmmod nvme_tcp 00:10:01.941 rmmod nvme_fabrics 00:10:01.941 rmmod nvme_keyring 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4125697 ']' 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4125697 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 4125697 ']' 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 4125697 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4125697 00:10:01.941 07:16:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4125697' 00:10:01.941 killing process with pid 4125697 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 4125697 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 4125697 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.941 07:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.332 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:03.332 00:10:03.332 real 0m23.007s 00:10:03.332 user 1m3.897s 00:10:03.332 sys 0m7.659s 00:10:03.332 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.332 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:03.332 ************************************ 00:10:03.332 END TEST nvmf_lvol 00:10:03.332 ************************************ 00:10:03.332 07:16:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:03.332 07:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:03.332 07:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.332 07:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.332 ************************************ 00:10:03.332 START TEST nvmf_lvs_grow 00:10:03.332 ************************************ 00:10:03.332 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:03.595 * Looking for test storage... 
00:10:03.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.595 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.595 07:16:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:03.596 07:16:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:10:03.596 07:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:10.262 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.262 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:10:10.262 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:10.263 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:10.263 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:10.263 
07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:10.263 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:10.263 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.263 07:16:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.263 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.525 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.525 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.525 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:10.525 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.525 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.525 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:10.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:10:10.786 00:10:10.786 --- 10.0.0.2 ping statistics --- 00:10:10.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.786 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:10:10.786 00:10:10.786 --- 10.0.0.1 ping statistics --- 00:10:10.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.786 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4132759 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4132759 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 4132759 ']' 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.786 07:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:10.786 [2024-07-25 07:16:18.021469] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
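The two ping results above are the tail end of nvmf_tcp_init's network setup. Condensed from the commands visible in the trace (a sketch, not the full common.sh logic), that setup is roughly:

  # target port cvl_0_0 moves into its own namespace; initiator side cvl_0_1 stays in the root namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

This is why nvmf_tgt above is launched under 'ip netns exec cvl_0_0_ns_spdk', while bdevperf later connects from the root namespace to 10.0.0.2:4420.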
00:10:10.786 [2024-07-25 07:16:18.021532] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.786 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.786 [2024-07-25 07:16:18.092815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.047 [2024-07-25 07:16:18.166674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.047 [2024-07-25 07:16:18.166714] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.047 [2024-07-25 07:16:18.166721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.047 [2024-07-25 07:16:18.166727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.047 [2024-07-25 07:16:18.166733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.047 [2024-07-25 07:16:18.166757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.619 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.619 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:11.619 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.619 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.619 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:11.619 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.619 07:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:11.619 [2024-07-25 07:16:18.974250] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:11.880 ************************************ 00:10:11.880 START TEST lvs_grow_clean 00:10:11.880 ************************************ 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:11.880 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:12.141 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:12.141 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:12.141 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:12.141 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:12.141 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:12.402 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:12.402 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:12.402 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f lvol 150 00:10:12.402 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ce4d212c-fad7-4351-b1bc-f968dc9f0b35 00:10:12.402 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:12.402 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:12.663 [2024-07-25 07:16:19.866756] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:12.663 [2024-07-25 07:16:19.866808] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:12.663 true 00:10:12.663 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:12.663 07:16:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:12.924 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:12.924 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:12.924 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ce4d212c-fad7-4351-b1bc-f968dc9f0b35 00:10:13.185 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:13.185 [2024-07-25 07:16:20.496828] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.185 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:13.446 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4133161 00:10:13.446 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:13.446 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:13.446 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4133161 /var/tmp/bdevperf.sock 00:10:13.446 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 4133161 ']' 00:10:13.446 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:13.446 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.446 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:13.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:13.446 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.446 07:16:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:13.446 [2024-07-25 07:16:20.712188] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
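At this point lvs_grow_clean has assembled the layout below; a condensed sketch using the same $SPDK/$rpc shorthand as before (editorial, not script lines), with the lvstore uuid printed above:

  aio=$SPDK/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)      # 150M thick-provisioned lvol
  truncate -s 400M "$aio"                               # grow the backing file on disk...
  $rpc bdev_aio_rescan aio_bdev                         # ...and let the aio bdev pick up the new size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The 49 in the data_clusters check is plain arithmetic: a 200M file at --cluster-sz 4194304 holds 50 clusters of 4 MiB, and the difference (one cluster in this run) is taken up by lvstore metadata. Note that the rescan alone does not grow the lvstore; total_data_clusters stays at 49 until bdev_lvol_grow_lvstore is issued mid-run. bdevperf is then started on a second core (-m 0x2) and attached with bdev_nvme_attach_controller, which is what produces the Nvme0n1 bdev dumped next.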
00:10:13.446 [2024-07-25 07:16:20.712244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4133161 ] 00:10:13.446 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.446 [2024-07-25 07:16:20.789983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.706 [2024-07-25 07:16:20.853977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.279 07:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.279 07:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:14.279 07:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:14.539 Nvme0n1 00:10:14.539 07:16:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:14.801 [ 00:10:14.801 { 00:10:14.801 "name": "Nvme0n1", 00:10:14.801 "aliases": [ 00:10:14.801 "ce4d212c-fad7-4351-b1bc-f968dc9f0b35" 00:10:14.801 ], 00:10:14.801 "product_name": "NVMe disk", 00:10:14.801 "block_size": 4096, 00:10:14.801 "num_blocks": 38912, 00:10:14.801 "uuid": "ce4d212c-fad7-4351-b1bc-f968dc9f0b35", 00:10:14.801 "assigned_rate_limits": { 00:10:14.801 "rw_ios_per_sec": 0, 00:10:14.801 "rw_mbytes_per_sec": 0, 00:10:14.801 "r_mbytes_per_sec": 0, 00:10:14.801 "w_mbytes_per_sec": 0 00:10:14.801 }, 00:10:14.801 "claimed": false, 00:10:14.801 "zoned": false, 00:10:14.801 "supported_io_types": { 00:10:14.801 "read": true, 00:10:14.801 "write": true, 00:10:14.801 "unmap": true, 00:10:14.801 "flush": true, 00:10:14.801 "reset": true, 00:10:14.801 "nvme_admin": true, 00:10:14.801 "nvme_io": true, 00:10:14.801 "nvme_io_md": false, 00:10:14.801 "write_zeroes": true, 00:10:14.801 "zcopy": false, 00:10:14.801 "get_zone_info": false, 00:10:14.801 "zone_management": false, 00:10:14.801 "zone_append": false, 00:10:14.801 "compare": true, 00:10:14.801 "compare_and_write": true, 00:10:14.801 "abort": true, 00:10:14.801 "seek_hole": false, 00:10:14.801 "seek_data": false, 00:10:14.801 "copy": true, 00:10:14.801 "nvme_iov_md": false 00:10:14.801 }, 00:10:14.801 "memory_domains": [ 00:10:14.801 { 00:10:14.801 "dma_device_id": "system", 00:10:14.801 "dma_device_type": 1 00:10:14.801 } 00:10:14.801 ], 00:10:14.801 "driver_specific": { 00:10:14.801 "nvme": [ 00:10:14.801 { 00:10:14.801 "trid": { 00:10:14.801 "trtype": "TCP", 00:10:14.801 "adrfam": "IPv4", 00:10:14.801 "traddr": "10.0.0.2", 00:10:14.801 "trsvcid": "4420", 00:10:14.801 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:14.801 }, 00:10:14.801 "ctrlr_data": { 00:10:14.801 "cntlid": 1, 00:10:14.801 "vendor_id": "0x8086", 00:10:14.801 "model_number": "SPDK bdev Controller", 00:10:14.801 "serial_number": "SPDK0", 00:10:14.801 "firmware_revision": "24.09", 00:10:14.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:14.801 "oacs": { 00:10:14.801 "security": 0, 00:10:14.801 "format": 0, 00:10:14.801 "firmware": 0, 00:10:14.801 "ns_manage": 0 00:10:14.801 }, 00:10:14.801 
"multi_ctrlr": true, 00:10:14.801 "ana_reporting": false 00:10:14.801 }, 00:10:14.801 "vs": { 00:10:14.801 "nvme_version": "1.3" 00:10:14.801 }, 00:10:14.801 "ns_data": { 00:10:14.801 "id": 1, 00:10:14.801 "can_share": true 00:10:14.801 } 00:10:14.801 } 00:10:14.801 ], 00:10:14.801 "mp_policy": "active_passive" 00:10:14.801 } 00:10:14.801 } 00:10:14.801 ] 00:10:14.801 07:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4133492 00:10:14.801 07:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:14.801 07:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:14.801 Running I/O for 10 seconds... 00:10:15.745 Latency(us) 00:10:15.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.745 Nvme0n1 : 1.00 17978.00 70.23 0.00 0.00 0.00 0.00 0.00 00:10:15.745 =================================================================================================================== 00:10:15.745 Total : 17978.00 70.23 0.00 0.00 0.00 0.00 0.00 00:10:15.745 00:10:16.686 07:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:16.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.948 Nvme0n1 : 2.00 18141.00 70.86 0.00 0.00 0.00 0.00 0.00 00:10:16.948 =================================================================================================================== 00:10:16.948 Total : 18141.00 70.86 0.00 0.00 0.00 0.00 0.00 00:10:16.948 00:10:16.948 true 00:10:16.948 07:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:16.948 07:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:17.209 07:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:17.209 07:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:17.209 07:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4133492 00:10:17.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.781 Nvme0n1 : 3.00 18216.33 71.16 0.00 0.00 0.00 0.00 0.00 00:10:17.781 =================================================================================================================== 00:10:17.781 Total : 18216.33 71.16 0.00 0.00 0.00 0.00 0.00 00:10:17.781 00:10:19.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.167 Nvme0n1 : 4.00 18254.25 71.31 0.00 0.00 0.00 0.00 0.00 00:10:19.167 =================================================================================================================== 00:10:19.167 Total : 18254.25 71.31 0.00 0.00 0.00 0.00 0.00 00:10:19.167 00:10:19.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:10:19.739 Nvme0n1 : 5.00 18277.00 71.39 0.00 0.00 0.00 0.00 0.00 00:10:19.739 =================================================================================================================== 00:10:19.739 Total : 18277.00 71.39 0.00 0.00 0.00 0.00 0.00 00:10:19.739 00:10:21.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.124 Nvme0n1 : 6.00 18302.83 71.50 0.00 0.00 0.00 0.00 0.00 00:10:21.124 =================================================================================================================== 00:10:21.124 Total : 18302.83 71.50 0.00 0.00 0.00 0.00 0.00 00:10:21.124 00:10:22.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.069 Nvme0n1 : 7.00 18321.29 71.57 0.00 0.00 0.00 0.00 0.00 00:10:22.069 =================================================================================================================== 00:10:22.069 Total : 18321.29 71.57 0.00 0.00 0.00 0.00 0.00 00:10:22.069 00:10:23.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.012 Nvme0n1 : 8.00 18329.25 71.60 0.00 0.00 0.00 0.00 0.00 00:10:23.012 =================================================================================================================== 00:10:23.012 Total : 18329.25 71.60 0.00 0.00 0.00 0.00 0.00 00:10:23.012 00:10:23.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.955 Nvme0n1 : 9.00 18340.67 71.64 0.00 0.00 0.00 0.00 0.00 00:10:23.955 =================================================================================================================== 00:10:23.955 Total : 18340.67 71.64 0.00 0.00 0.00 0.00 0.00 00:10:23.955 00:10:24.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.898 Nvme0n1 : 10.00 18354.50 71.70 0.00 0.00 0.00 0.00 0.00 00:10:24.898 =================================================================================================================== 00:10:24.898 Total : 18354.50 71.70 0.00 0.00 0.00 0.00 0.00 00:10:24.898 00:10:24.898 00:10:24.898 Latency(us) 00:10:24.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.898 Nvme0n1 : 10.01 18354.94 71.70 0.00 0.00 6970.50 4778.67 19442.35 00:10:24.898 =================================================================================================================== 00:10:24.898 Total : 18354.94 71.70 0.00 0.00 6970.50 4778.67 19442.35 00:10:24.898 0 00:10:24.898 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4133161 00:10:24.898 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 4133161 ']' 00:10:24.898 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 4133161 00:10:24.898 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:24.898 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.898 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4133161 00:10:24.898 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:24.898 
07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:24.898 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4133161' 00:10:24.898 killing process with pid 4133161 00:10:24.898 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 4133161 00:10:24.898 Received shutdown signal, test time was about 10.000000 seconds 00:10:24.898 00:10:24.898 Latency(us) 00:10:24.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.898 =================================================================================================================== 00:10:24.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:24.898 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 4133161 00:10:25.159 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:25.159 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:25.429 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:25.429 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:25.429 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:25.429 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:25.429 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:25.689 [2024-07-25 07:16:32.916676] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:25.689 07:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:25.950 request: 00:10:25.950 { 00:10:25.950 "uuid": "12a50e31-1da5-4785-ada3-7ca7f3dd211f", 00:10:25.950 "method": "bdev_lvol_get_lvstores", 00:10:25.950 "req_id": 1 00:10:25.950 } 00:10:25.950 Got JSON-RPC error response 00:10:25.950 response: 00:10:25.950 { 00:10:25.950 "code": -19, 00:10:25.950 "message": "No such device" 00:10:25.950 } 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:25.950 aio_bdev 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ce4d212c-fad7-4351-b1bc-f968dc9f0b35 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ce4d212c-fad7-4351-b1bc-f968dc9f0b35 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.950 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:26.211 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b ce4d212c-fad7-4351-b1bc-f968dc9f0b35 -t 2000 00:10:26.211 [ 00:10:26.211 { 00:10:26.211 "name": "ce4d212c-fad7-4351-b1bc-f968dc9f0b35", 00:10:26.211 "aliases": [ 00:10:26.211 "lvs/lvol" 00:10:26.211 ], 00:10:26.211 "product_name": "Logical Volume", 00:10:26.211 "block_size": 4096, 00:10:26.211 "num_blocks": 38912, 00:10:26.211 "uuid": "ce4d212c-fad7-4351-b1bc-f968dc9f0b35", 00:10:26.211 "assigned_rate_limits": { 00:10:26.211 "rw_ios_per_sec": 0, 00:10:26.211 "rw_mbytes_per_sec": 0, 00:10:26.211 "r_mbytes_per_sec": 0, 00:10:26.211 "w_mbytes_per_sec": 0 00:10:26.211 }, 00:10:26.211 "claimed": false, 00:10:26.211 "zoned": false, 00:10:26.211 "supported_io_types": { 00:10:26.211 "read": true, 00:10:26.211 "write": true, 00:10:26.211 "unmap": true, 00:10:26.211 "flush": false, 00:10:26.211 "reset": true, 00:10:26.211 "nvme_admin": false, 00:10:26.211 "nvme_io": false, 00:10:26.211 "nvme_io_md": false, 00:10:26.211 "write_zeroes": true, 00:10:26.211 "zcopy": false, 00:10:26.211 "get_zone_info": false, 00:10:26.211 "zone_management": false, 00:10:26.211 "zone_append": false, 00:10:26.211 "compare": false, 00:10:26.211 "compare_and_write": false, 00:10:26.211 "abort": false, 00:10:26.211 "seek_hole": true, 00:10:26.211 "seek_data": true, 00:10:26.211 "copy": false, 00:10:26.211 "nvme_iov_md": false 00:10:26.211 }, 00:10:26.211 "driver_specific": { 00:10:26.211 "lvol": { 00:10:26.211 "lvol_store_uuid": "12a50e31-1da5-4785-ada3-7ca7f3dd211f", 00:10:26.211 "base_bdev": "aio_bdev", 00:10:26.211 "thin_provision": false, 00:10:26.211 "num_allocated_clusters": 38, 00:10:26.211 "snapshot": false, 00:10:26.211 "clone": false, 00:10:26.211 "esnap_clone": false 00:10:26.211 } 00:10:26.211 } 00:10:26.211 } 00:10:26.211 ] 00:10:26.211 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:26.211 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:26.211 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:26.471 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:26.471 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:26.471 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:26.731 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:26.731 07:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ce4d212c-fad7-4351-b1bc-f968dc9f0b35 00:10:26.731 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 12a50e31-1da5-4785-ada3-7ca7f3dd211f 00:10:26.992 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:26.992 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:27.252 00:10:27.252 real 0m15.321s 00:10:27.252 user 0m14.995s 00:10:27.252 sys 0m1.322s 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:27.252 ************************************ 00:10:27.252 END TEST lvs_grow_clean 00:10:27.252 ************************************ 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:27.252 ************************************ 00:10:27.252 START TEST lvs_grow_dirty 00:10:27.252 ************************************ 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:27.252 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:27.253 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:27.253 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:27.513 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:27.513 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:27.513 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:27.513 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:27.513 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:27.774 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:27.774 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:27.774 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 lvol 150 00:10:27.774 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5c715545-96ae-4b76-9835-950e0904ef86 00:10:27.774 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:27.774 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:28.035 [2024-07-25 07:16:35.240233] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:28.035 [2024-07-25 07:16:35.240287] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:28.035 true 00:10:28.035 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:28.035 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:28.295 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:28.295 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:28.295 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5c715545-96ae-4b76-9835-950e0904ef86 00:10:28.556 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:28.556 [2024-07-25 07:16:35.858113] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.556 07:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
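The trace above walks through the whole dirty-grow setup: a 200M file becomes an AIO bdev with 4 KiB blocks, an lvstore with 4 MiB clusters (49 data clusters) and a 150M lvol are created on it, the backing file is then truncated to 400M and rescanned so the bdev grows while the lvstore still reports 49 clusters, and the lvol is exported over NVMe/TCP. A condensed, hedged sketch of that RPC sequence, with $rpc standing in for the scripts/rpc.py path used above, an illustrative backing-file path, and the assumption that the TCP transport was created earlier in the run:

# sketch only; $rpc and the backing-file path are illustrative shorthands
truncate -s 200M /tmp/aio_bdev_file
$rpc bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)      # 4 MiB clusters -> 49 data clusters on 200M
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)          # 150M thick-provisioned lvol
truncate -s 400M /tmp/aio_bdev_file                       # grow the file underneath the bdev
$rpc bdev_aio_rescan aio_bdev                             # bdev goes 51200 -> 102400 blocks; lvstore not grown yet
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420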
00:10:28.818 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4136261 00:10:28.818 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:28.818 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:28.818 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4136261 /var/tmp/bdevperf.sock 00:10:28.818 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 4136261 ']' 00:10:28.818 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:28.818 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.818 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:28.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:28.818 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.818 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:28.818 [2024-07-25 07:16:36.067615] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:10:28.818 [2024-07-25 07:16:36.067667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136261 ] 00:10:28.818 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.818 [2024-07-25 07:16:36.141108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.079 [2024-07-25 07:16:36.195039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.650 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.650 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:29.650 07:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:29.911 Nvme0n1 00:10:29.911 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:29.911 [ 00:10:29.911 { 00:10:29.911 "name": "Nvme0n1", 00:10:29.911 "aliases": [ 00:10:29.911 "5c715545-96ae-4b76-9835-950e0904ef86" 00:10:29.911 ], 00:10:29.911 "product_name": "NVMe disk", 00:10:29.911 "block_size": 4096, 00:10:29.911 "num_blocks": 38912, 00:10:29.911 "uuid": "5c715545-96ae-4b76-9835-950e0904ef86", 00:10:29.911 "assigned_rate_limits": { 00:10:29.911 "rw_ios_per_sec": 0, 00:10:29.911 "rw_mbytes_per_sec": 0, 00:10:29.911 "r_mbytes_per_sec": 0, 00:10:29.911 "w_mbytes_per_sec": 0 00:10:29.911 }, 00:10:29.911 "claimed": false, 00:10:29.911 "zoned": false, 00:10:29.911 "supported_io_types": { 00:10:29.911 "read": true, 00:10:29.911 "write": true, 00:10:29.911 "unmap": true, 00:10:29.911 "flush": true, 00:10:29.911 "reset": true, 00:10:29.911 "nvme_admin": true, 00:10:29.911 "nvme_io": true, 00:10:29.911 "nvme_io_md": false, 00:10:29.911 "write_zeroes": true, 00:10:29.911 "zcopy": false, 00:10:29.911 "get_zone_info": false, 00:10:29.911 "zone_management": false, 00:10:29.911 "zone_append": false, 00:10:29.911 "compare": true, 00:10:29.911 "compare_and_write": true, 00:10:29.911 "abort": true, 00:10:29.911 "seek_hole": false, 00:10:29.911 "seek_data": false, 00:10:29.911 "copy": true, 00:10:29.911 "nvme_iov_md": false 00:10:29.911 }, 00:10:29.911 "memory_domains": [ 00:10:29.911 { 00:10:29.911 "dma_device_id": "system", 00:10:29.911 "dma_device_type": 1 00:10:29.911 } 00:10:29.911 ], 00:10:29.911 "driver_specific": { 00:10:29.911 "nvme": [ 00:10:29.911 { 00:10:29.911 "trid": { 00:10:29.911 "trtype": "TCP", 00:10:29.911 "adrfam": "IPv4", 00:10:29.911 "traddr": "10.0.0.2", 00:10:29.911 "trsvcid": "4420", 00:10:29.911 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:29.911 }, 00:10:29.911 "ctrlr_data": { 00:10:29.911 "cntlid": 1, 00:10:29.911 "vendor_id": "0x8086", 00:10:29.911 "model_number": "SPDK bdev Controller", 00:10:29.911 "serial_number": "SPDK0", 00:10:29.911 "firmware_revision": "24.09", 00:10:29.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:29.911 "oacs": { 00:10:29.911 "security": 0, 00:10:29.911 "format": 0, 00:10:29.911 "firmware": 0, 00:10:29.911 "ns_manage": 0 00:10:29.911 }, 00:10:29.911 
"multi_ctrlr": true, 00:10:29.911 "ana_reporting": false 00:10:29.911 }, 00:10:29.911 "vs": { 00:10:29.911 "nvme_version": "1.3" 00:10:29.911 }, 00:10:29.911 "ns_data": { 00:10:29.911 "id": 1, 00:10:29.911 "can_share": true 00:10:29.911 } 00:10:29.911 } 00:10:29.911 ], 00:10:29.911 "mp_policy": "active_passive" 00:10:29.911 } 00:10:29.911 } 00:10:29.911 ] 00:10:29.911 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4136580 00:10:29.911 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:29.911 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:30.172 Running I/O for 10 seconds... 00:10:31.115 Latency(us) 00:10:31.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.115 Nvme0n1 : 1.00 17580.00 68.67 0.00 0.00 0.00 0.00 0.00 00:10:31.115 =================================================================================================================== 00:10:31.115 Total : 17580.00 68.67 0.00 0.00 0.00 0.00 0.00 00:10:31.115 00:10:32.057 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:32.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.057 Nvme0n1 : 2.00 17674.00 69.04 0.00 0.00 0.00 0.00 0.00 00:10:32.057 =================================================================================================================== 00:10:32.057 Total : 17674.00 69.04 0.00 0.00 0.00 0.00 0.00 00:10:32.057 00:10:32.057 true 00:10:32.318 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:32.318 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:32.318 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:32.318 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:32.318 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4136580 00:10:33.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.259 Nvme0n1 : 3.00 17697.33 69.13 0.00 0.00 0.00 0.00 0.00 00:10:33.259 =================================================================================================================== 00:10:33.259 Total : 17697.33 69.13 0.00 0.00 0.00 0.00 0.00 00:10:33.259 00:10:34.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.201 Nvme0n1 : 4.00 17723.00 69.23 0.00 0.00 0.00 0.00 0.00 00:10:34.201 =================================================================================================================== 00:10:34.201 Total : 17723.00 69.23 0.00 0.00 0.00 0.00 0.00 00:10:34.201 00:10:35.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:10:35.143 Nvme0n1 : 5.00 17751.20 69.34 0.00 0.00 0.00 0.00 0.00 00:10:35.143 =================================================================================================================== 00:10:35.143 Total : 17751.20 69.34 0.00 0.00 0.00 0.00 0.00 00:10:35.143 00:10:36.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.085 Nvme0n1 : 6.00 17768.67 69.41 0.00 0.00 0.00 0.00 0.00 00:10:36.085 =================================================================================================================== 00:10:36.085 Total : 17768.67 69.41 0.00 0.00 0.00 0.00 0.00 00:10:36.085 00:10:37.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.025 Nvme0n1 : 7.00 17786.86 69.48 0.00 0.00 0.00 0.00 0.00 00:10:37.025 =================================================================================================================== 00:10:37.025 Total : 17786.86 69.48 0.00 0.00 0.00 0.00 0.00 00:10:37.025 00:10:38.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.409 Nvme0n1 : 8.00 17799.50 69.53 0.00 0.00 0.00 0.00 0.00 00:10:38.409 =================================================================================================================== 00:10:38.409 Total : 17799.50 69.53 0.00 0.00 0.00 0.00 0.00 00:10:38.409 00:10:39.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.351 Nvme0n1 : 9.00 17812.00 69.58 0.00 0.00 0.00 0.00 0.00 00:10:39.351 =================================================================================================================== 00:10:39.351 Total : 17812.00 69.58 0.00 0.00 0.00 0.00 0.00 00:10:39.351 00:10:40.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.295 Nvme0n1 : 10.00 17821.20 69.61 0.00 0.00 0.00 0.00 0.00 00:10:40.295 =================================================================================================================== 00:10:40.295 Total : 17821.20 69.61 0.00 0.00 0.00 0.00 0.00 00:10:40.295 00:10:40.295 00:10:40.295 Latency(us) 00:10:40.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.295 Nvme0n1 : 10.01 17821.25 69.61 0.00 0.00 7177.85 4478.29 12397.23 00:10:40.295 =================================================================================================================== 00:10:40.295 Total : 17821.25 69.61 0.00 0.00 7177.85 4478.29 12397.23 00:10:40.295 0 00:10:40.295 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4136261 00:10:40.295 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 4136261 ']' 00:10:40.295 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 4136261 00:10:40.295 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:40.295 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.295 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4136261 00:10:40.295 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:40.295 
07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:40.295 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4136261' 00:10:40.295 killing process with pid 4136261 00:10:40.295 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 4136261 00:10:40.295 Received shutdown signal, test time was about 10.000000 seconds 00:10:40.295 00:10:40.295 Latency(us) 00:10:40.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.295 =================================================================================================================== 00:10:40.295 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:40.295 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 4136261 00:10:40.295 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.556 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:40.556 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:40.556 07:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:40.817 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:40.817 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:40.817 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4132759 00:10:40.817 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4132759 00:10:40.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4132759 Killed "${NVMF_APP[@]}" "$@" 00:10:40.817 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4138715 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4138715 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 4138715 ']' 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.818 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:40.818 [2024-07-25 07:16:48.148820] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:10:40.818 [2024-07-25 07:16:48.148878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.818 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.079 [2024-07-25 07:16:48.216760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.079 [2024-07-25 07:16:48.284047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.079 [2024-07-25 07:16:48.284086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.079 [2024-07-25 07:16:48.284094] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.079 [2024-07-25 07:16:48.284100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.079 [2024-07-25 07:16:48.284106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
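The startup notices above describe how to capture the trace data that the -e 0xFFFF tracepoint mask enables. For reference, the two options the target mentions look roughly like this (the spdk_trace binary location and the copy destination are assumptions for this host):

# snapshot the live tracepoints of instance 0, as the notice suggests
spdk_trace -s nvmf -i 0
# or keep the shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0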
00:10:41.079 [2024-07-25 07:16:48.284130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.649 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.649 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:41.649 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.649 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.649 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:41.649 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.649 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:41.910 [2024-07-25 07:16:49.097287] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:41.910 [2024-07-25 07:16:49.097378] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:41.910 [2024-07-25 07:16:49.097413] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:41.910 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:41.910 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5c715545-96ae-4b76-9835-950e0904ef86 00:10:41.910 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=5c715545-96ae-4b76-9835-950e0904ef86 00:10:41.910 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:41.910 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:41.910 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:41.910 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:41.910 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:41.910 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5c715545-96ae-4b76-9835-950e0904ef86 -t 2000 00:10:42.171 [ 00:10:42.171 { 00:10:42.171 "name": "5c715545-96ae-4b76-9835-950e0904ef86", 00:10:42.171 "aliases": [ 00:10:42.171 "lvs/lvol" 00:10:42.171 ], 00:10:42.171 "product_name": "Logical Volume", 00:10:42.171 "block_size": 4096, 00:10:42.171 "num_blocks": 38912, 00:10:42.171 "uuid": "5c715545-96ae-4b76-9835-950e0904ef86", 00:10:42.171 "assigned_rate_limits": { 00:10:42.171 "rw_ios_per_sec": 0, 00:10:42.171 "rw_mbytes_per_sec": 0, 00:10:42.171 "r_mbytes_per_sec": 0, 00:10:42.171 "w_mbytes_per_sec": 0 00:10:42.171 }, 00:10:42.171 "claimed": false, 00:10:42.171 "zoned": false, 
00:10:42.171 "supported_io_types": { 00:10:42.171 "read": true, 00:10:42.171 "write": true, 00:10:42.171 "unmap": true, 00:10:42.171 "flush": false, 00:10:42.171 "reset": true, 00:10:42.171 "nvme_admin": false, 00:10:42.171 "nvme_io": false, 00:10:42.171 "nvme_io_md": false, 00:10:42.171 "write_zeroes": true, 00:10:42.171 "zcopy": false, 00:10:42.171 "get_zone_info": false, 00:10:42.171 "zone_management": false, 00:10:42.171 "zone_append": false, 00:10:42.171 "compare": false, 00:10:42.171 "compare_and_write": false, 00:10:42.171 "abort": false, 00:10:42.171 "seek_hole": true, 00:10:42.171 "seek_data": true, 00:10:42.171 "copy": false, 00:10:42.171 "nvme_iov_md": false 00:10:42.171 }, 00:10:42.171 "driver_specific": { 00:10:42.171 "lvol": { 00:10:42.171 "lvol_store_uuid": "5f731bfc-4a4c-4933-a017-f2525377d9b2", 00:10:42.171 "base_bdev": "aio_bdev", 00:10:42.171 "thin_provision": false, 00:10:42.171 "num_allocated_clusters": 38, 00:10:42.171 "snapshot": false, 00:10:42.171 "clone": false, 00:10:42.171 "esnap_clone": false 00:10:42.171 } 00:10:42.171 } 00:10:42.171 } 00:10:42.171 ] 00:10:42.171 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:42.171 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:42.171 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:42.433 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:42.433 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:42.433 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:42.433 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:42.433 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:42.694 [2024-07-25 07:16:49.861167] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:42.694 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:42.956 request: 00:10:42.956 { 00:10:42.956 "uuid": "5f731bfc-4a4c-4933-a017-f2525377d9b2", 00:10:42.956 "method": "bdev_lvol_get_lvstores", 00:10:42.956 "req_id": 1 00:10:42.956 } 00:10:42.956 Got JSON-RPC error response 00:10:42.956 response: 00:10:42.956 { 00:10:42.956 "code": -19, 00:10:42.956 "message": "No such device" 00:10:42.956 } 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:42.956 aio_bdev 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5c715545-96ae-4b76-9835-950e0904ef86 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=5c715545-96ae-4b76-9835-950e0904ef86 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.956 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:43.217 07:16:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5c715545-96ae-4b76-9835-950e0904ef86 -t 2000 00:10:43.218 [ 00:10:43.218 { 00:10:43.218 "name": "5c715545-96ae-4b76-9835-950e0904ef86", 00:10:43.218 "aliases": [ 00:10:43.218 "lvs/lvol" 00:10:43.218 ], 00:10:43.218 "product_name": "Logical Volume", 00:10:43.218 "block_size": 4096, 00:10:43.218 "num_blocks": 38912, 00:10:43.218 "uuid": "5c715545-96ae-4b76-9835-950e0904ef86", 00:10:43.218 "assigned_rate_limits": { 00:10:43.218 "rw_ios_per_sec": 0, 00:10:43.218 "rw_mbytes_per_sec": 0, 00:10:43.218 "r_mbytes_per_sec": 0, 00:10:43.218 "w_mbytes_per_sec": 0 00:10:43.218 }, 00:10:43.218 "claimed": false, 00:10:43.218 "zoned": false, 00:10:43.218 "supported_io_types": { 00:10:43.218 "read": true, 00:10:43.218 "write": true, 00:10:43.218 "unmap": true, 00:10:43.218 "flush": false, 00:10:43.218 "reset": true, 00:10:43.218 "nvme_admin": false, 00:10:43.218 "nvme_io": false, 00:10:43.218 "nvme_io_md": false, 00:10:43.218 "write_zeroes": true, 00:10:43.218 "zcopy": false, 00:10:43.218 "get_zone_info": false, 00:10:43.218 "zone_management": false, 00:10:43.218 "zone_append": false, 00:10:43.218 "compare": false, 00:10:43.218 "compare_and_write": false, 00:10:43.218 "abort": false, 00:10:43.218 "seek_hole": true, 00:10:43.218 "seek_data": true, 00:10:43.218 "copy": false, 00:10:43.218 "nvme_iov_md": false 00:10:43.218 }, 00:10:43.218 "driver_specific": { 00:10:43.218 "lvol": { 00:10:43.218 "lvol_store_uuid": "5f731bfc-4a4c-4933-a017-f2525377d9b2", 00:10:43.218 "base_bdev": "aio_bdev", 00:10:43.218 "thin_provision": false, 00:10:43.218 "num_allocated_clusters": 38, 00:10:43.218 "snapshot": false, 00:10:43.218 "clone": false, 00:10:43.218 "esnap_clone": false 00:10:43.218 } 00:10:43.218 } 00:10:43.218 } 00:10:43.218 ] 00:10:43.218 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:43.218 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:43.218 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:43.479 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:43.479 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 00:10:43.479 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:43.739 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:43.739 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5c715545-96ae-4b76-9835-950e0904ef86 00:10:43.739 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f731bfc-4a4c-4933-a017-f2525377d9b2 
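The free_clusters/total_data_clusters checks a few lines up are the arithmetic that proves the dirty grow worked: with the 4 MiB clusters this lvstore was created with, the 400M backing file yields 100 clusters, of which 99 are usable for data in this configuration (the remainder holds lvstore metadata), and the 150M lvol rounds up to 38 allocated clusters (38912 blocks of 4 KiB), leaving 99 - 38 = 61 free. A minimal standalone re-check of the same numbers, reusing $rpc and $lvs as illustrative shorthands and assuming it runs before the lvol and lvstore are deleted:

# hedged re-check of (( data_clusters == 99 )) and (( free_clusters == 61 ))
total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
free=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
(( total == 99 && free == 61 )) && echo 'grow verified'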
00:10:44.000 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:44.261 00:10:44.261 real 0m16.964s 00:10:44.261 user 0m44.368s 00:10:44.261 sys 0m2.902s 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:44.261 ************************************ 00:10:44.261 END TEST lvs_grow_dirty 00:10:44.261 ************************************ 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:44.261 nvmf_trace.0 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:44.261 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:44.262 rmmod nvme_tcp 00:10:44.262 rmmod nvme_fabrics 00:10:44.262 rmmod nvme_keyring 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4138715 ']' 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4138715 00:10:44.262 
07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 4138715 ']' 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 4138715 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.262 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4138715 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4138715' 00:10:44.527 killing process with pid 4138715 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 4138715 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 4138715 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.527 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.489 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:46.489 00:10:46.489 real 0m43.209s 00:10:46.489 user 1m5.315s 00:10:46.489 sys 0m10.058s 00:10:46.489 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.489 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:46.489 ************************************ 00:10:46.489 END TEST nvmf_lvs_grow 00:10:46.489 ************************************ 00:10:46.751 07:16:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:46.751 07:16:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.751 07:16:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.751 07:16:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.751 ************************************ 00:10:46.751 START TEST nvmf_bdev_io_wait 00:10:46.751 ************************************ 00:10:46.751 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:46.751 * Looking for test storage... 00:10:46.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.751 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.752 
07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:10:46.752 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:10:54.900 07:17:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.900 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:54.901 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:54.901 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:54.901 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:54.901 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:54.901 07:17:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.901 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.901 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.901 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:54.901 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:54.901 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:54.901 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:54.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:10:54.902 00:10:54.902 --- 10.0.0.2 ping statistics --- 00:10:54.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.902 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:54.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:54.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:10:54.902 00:10:54.902 --- 10.0.0.1 ping statistics --- 00:10:54.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.902 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4143672 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4143672 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 4143672 ']' 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.902 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.902 [2024-07-25 07:17:01.369500] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
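The nvmf_tcp_init steps traced above reduce to a small two-namespace topology: the second E810 port (cvl_0_1) stays in the default namespace as the initiator side at 10.0.0.1/24, the first port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2/24, and nvmf_tgt is then launched inside that namespace. A minimal manual equivalent, assuming the interface names from this run and an SPDK build tree as the working directory (a sketch only, not the full common.sh logic):

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator side (default namespace) and the target side (inside the namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic reach the default port 4420
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions, then start the target inside the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &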
00:10:54.902 [2024-07-25 07:17:01.369576] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.902 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.902 [2024-07-25 07:17:01.440269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.902 [2024-07-25 07:17:01.506623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.902 [2024-07-25 07:17:01.506663] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.902 [2024-07-25 07:17:01.506670] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.902 [2024-07-25 07:17:01.506677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.902 [2024-07-25 07:17:01.506682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.902 [2024-07-25 07:17:01.506828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.902 [2024-07-25 07:17:01.506946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.902 [2024-07-25 07:17:01.507103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.902 [2024-07-25 07:17:01.507104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.902 07:17:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.902 [2024-07-25 07:17:02.247399] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.902 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:55.164 Malloc0 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:55.164 [2024-07-25 07:17:02.326400] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4143987 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4143990 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:55.164 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:55.165 { 00:10:55.165 "params": { 00:10:55.165 "name": "Nvme$subsystem", 00:10:55.165 "trtype": "$TEST_TRANSPORT", 00:10:55.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:55.165 "adrfam": "ipv4", 00:10:55.165 "trsvcid": "$NVMF_PORT", 00:10:55.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:55.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:55.165 "hdgst": ${hdgst:-false}, 00:10:55.165 "ddgst": ${ddgst:-false} 00:10:55.165 }, 00:10:55.165 "method": "bdev_nvme_attach_controller" 00:10:55.165 } 00:10:55.165 EOF 00:10:55.165 )") 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4143992 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:55.165 { 00:10:55.165 "params": { 00:10:55.165 "name": "Nvme$subsystem", 00:10:55.165 "trtype": "$TEST_TRANSPORT", 00:10:55.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:55.165 "adrfam": "ipv4", 00:10:55.165 "trsvcid": "$NVMF_PORT", 00:10:55.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:55.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:55.165 "hdgst": ${hdgst:-false}, 00:10:55.165 "ddgst": ${ddgst:-false} 00:10:55.165 }, 00:10:55.165 "method": "bdev_nvme_attach_controller" 00:10:55.165 } 00:10:55.165 EOF 00:10:55.165 )") 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4143996 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:55.165 { 00:10:55.165 "params": { 00:10:55.165 "name": "Nvme$subsystem", 00:10:55.165 "trtype": "$TEST_TRANSPORT", 00:10:55.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:55.165 "adrfam": "ipv4", 00:10:55.165 "trsvcid": "$NVMF_PORT", 00:10:55.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:55.165 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:55.165 "hdgst": ${hdgst:-false}, 00:10:55.165 "ddgst": ${ddgst:-false} 00:10:55.165 }, 00:10:55.165 "method": "bdev_nvme_attach_controller" 00:10:55.165 } 00:10:55.165 EOF 00:10:55.165 )") 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:55.165 { 00:10:55.165 "params": { 00:10:55.165 "name": "Nvme$subsystem", 00:10:55.165 "trtype": "$TEST_TRANSPORT", 00:10:55.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:55.165 "adrfam": "ipv4", 00:10:55.165 "trsvcid": "$NVMF_PORT", 00:10:55.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:55.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:55.165 "hdgst": ${hdgst:-false}, 00:10:55.165 "ddgst": ${ddgst:-false} 00:10:55.165 }, 00:10:55.165 "method": "bdev_nvme_attach_controller" 00:10:55.165 } 00:10:55.165 EOF 00:10:55.165 )") 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4143987 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:55.165 "params": { 00:10:55.165 "name": "Nvme1", 00:10:55.165 "trtype": "tcp", 00:10:55.165 "traddr": "10.0.0.2", 00:10:55.165 "adrfam": "ipv4", 00:10:55.165 "trsvcid": "4420", 00:10:55.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:55.165 "hdgst": false, 00:10:55.165 "ddgst": false 00:10:55.165 }, 00:10:55.165 "method": "bdev_nvme_attach_controller" 00:10:55.165 }' 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:55.165 "params": { 00:10:55.165 "name": "Nvme1", 00:10:55.165 "trtype": "tcp", 00:10:55.165 "traddr": "10.0.0.2", 00:10:55.165 "adrfam": "ipv4", 00:10:55.165 "trsvcid": "4420", 00:10:55.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:55.165 "hdgst": false, 00:10:55.165 "ddgst": false 00:10:55.165 }, 00:10:55.165 "method": "bdev_nvme_attach_controller" 00:10:55.165 }' 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:55.165 "params": { 00:10:55.165 "name": "Nvme1", 00:10:55.165 "trtype": "tcp", 00:10:55.165 "traddr": "10.0.0.2", 00:10:55.165 "adrfam": "ipv4", 00:10:55.165 "trsvcid": "4420", 00:10:55.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:55.165 "hdgst": false, 00:10:55.165 "ddgst": false 00:10:55.165 }, 00:10:55.165 "method": "bdev_nvme_attach_controller" 00:10:55.165 }' 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:55.165 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:55.165 "params": { 00:10:55.165 "name": "Nvme1", 00:10:55.165 "trtype": "tcp", 00:10:55.165 "traddr": "10.0.0.2", 00:10:55.165 "adrfam": "ipv4", 00:10:55.165 "trsvcid": "4420", 00:10:55.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:55.165 "hdgst": false, 00:10:55.165 "ddgst": false 00:10:55.165 }, 00:10:55.165 "method": "bdev_nvme_attach_controller" 00:10:55.165 }' 00:10:55.165 [2024-07-25 07:17:02.379429] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:10:55.165 [2024-07-25 07:17:02.379483] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:55.165 [2024-07-25 07:17:02.380405] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:10:55.165 [2024-07-25 07:17:02.380451] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:55.165 [2024-07-25 07:17:02.383738] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:10:55.165 [2024-07-25 07:17:02.383785] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:55.165 [2024-07-25 07:17:02.385504] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
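Each of the four bdevperf instances launched above (write, read, flush and unmap, on core masks 0x10 through 0x80) gets its NVMe-oF connection from the JSON fragment that gen_nvmf_target_json prints, handed over via process substitution as --json /dev/fd/63. A stand-alone approximation for the write instance, assuming the fragment is wrapped in the usual bdev-subsystem envelope (the envelope itself is not visible in this excerpt) and saved to a file:

    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 128 outstanding 4096-byte writes for 1 second, core mask 0x10, 256 MB of memory, shm id 1
    ./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256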
00:10:55.165 [2024-07-25 07:17:02.385548] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:55.165 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.165 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.165 [2024-07-25 07:17:02.528259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.425 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.425 [2024-07-25 07:17:02.579270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:55.425 [2024-07-25 07:17:02.584139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.425 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.425 [2024-07-25 07:17:02.634142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:55.425 [2024-07-25 07:17:02.635490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.425 [2024-07-25 07:17:02.684649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.425 [2024-07-25 07:17:02.686689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:55.425 [2024-07-25 07:17:02.734714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:10:55.686 Running I/O for 1 seconds... 00:10:55.686 Running I/O for 1 seconds... 00:10:55.686 Running I/O for 1 seconds... 00:10:55.686 Running I/O for 1 seconds... 00:10:56.634 00:10:56.634 Latency(us) 00:10:56.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.634 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:56.634 Nvme1n1 : 1.00 188080.99 734.69 0.00 0.00 677.53 271.36 771.41 00:10:56.634 =================================================================================================================== 00:10:56.634 Total : 188080.99 734.69 0.00 0.00 677.53 271.36 771.41 00:10:56.634 00:10:56.634 Latency(us) 00:10:56.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.634 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:56.634 Nvme1n1 : 1.00 15128.75 59.10 0.00 0.00 8436.81 4724.05 17039.36 00:10:56.634 =================================================================================================================== 00:10:56.634 Total : 15128.75 59.10 0.00 0.00 8436.81 4724.05 17039.36 00:10:56.634 00:10:56.634 Latency(us) 00:10:56.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.635 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:56.635 Nvme1n1 : 1.00 16785.69 65.57 0.00 0.00 7606.67 3932.16 23374.51 00:10:56.635 =================================================================================================================== 00:10:56.635 Total : 16785.69 65.57 0.00 0.00 7606.67 3932.16 23374.51 00:10:56.896 00:10:56.896 Latency(us) 00:10:56.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.896 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:56.896 Nvme1n1 : 1.00 13041.08 50.94 0.00 0.00 9787.61 4696.75 21080.75 00:10:56.896 =================================================================================================================== 00:10:56.896 Total : 13041.08 50.94 0.00 0.00 9787.61 4696.75 21080.75 00:10:56.896 07:17:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4143990 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4143992 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4143996 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.896 rmmod nvme_tcp 00:10:56.896 rmmod nvme_fabrics 00:10:56.896 rmmod nvme_keyring 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.896 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4143672 ']' 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4143672 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 4143672 ']' 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 4143672 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4143672 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4143672' 00:10:57.157 killing process with pid 4143672 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 4143672 00:10:57.157 07:17:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 4143672 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.157 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:59.704 00:10:59.704 real 0m12.593s 00:10:59.704 user 0m19.098s 00:10:59.704 sys 0m6.842s 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:59.704 ************************************ 00:10:59.704 END TEST nvmf_bdev_io_wait 00:10:59.704 ************************************ 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.704 ************************************ 00:10:59.704 START TEST nvmf_queue_depth 00:10:59.704 ************************************ 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:59.704 * Looking for test storage... 
00:10:59.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.704 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:59.705 07:17:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.705 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:11:06.297 07:17:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:06.297 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:06.297 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:06.297 07:17:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:06.297 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:06.297 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:11:06.297 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:06.298 
07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.298 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:06.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:11:06.558 00:11:06.558 --- 10.0.0.2 ping statistics --- 00:11:06.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.558 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:11:06.558 00:11:06.558 --- 10.0.0.1 ping statistics --- 00:11:06.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.558 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:06.558 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4148409 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 4148409 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 4148409 ']' 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:06.820 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:06.820 [2024-07-25 07:17:13.997247] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:11:06.820 [2024-07-25 07:17:13.997325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.820 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.820 [2024-07-25 07:17:14.087153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.820 [2024-07-25 07:17:14.180099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.820 [2024-07-25 07:17:14.180158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.820 [2024-07-25 07:17:14.180166] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.820 [2024-07-25 07:17:14.180173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.820 [2024-07-25 07:17:14.180179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:06.820 [2024-07-25 07:17:14.180214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:07.764 [2024-07-25 07:17:14.831805] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:07.764 Malloc0 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:07.764 [2024-07-25 07:17:14.898436] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4148744 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4148744 /var/tmp/bdevperf.sock 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 4148744 ']' 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:07.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.764 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:07.764 [2024-07-25 07:17:14.951372] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:11:07.764 [2024-07-25 07:17:14.951425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148744 ] 00:11:07.764 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.764 [2024-07-25 07:17:15.013404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.764 [2024-07-25 07:17:15.086437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.706 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.706 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:08.706 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:08.706 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.706 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.706 NVMe0n1 00:11:08.706 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.706 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:08.706 Running I/O for 10 seconds... 00:11:20.986 00:11:20.986 Latency(us) 00:11:20.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.986 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:20.986 Verification LBA range: start 0x0 length 0x4000 00:11:20.986 NVMe0n1 : 10.06 11768.55 45.97 0.00 0.00 86674.54 24903.68 68157.44 00:11:20.986 =================================================================================================================== 00:11:20.986 Total : 11768.55 45.97 0.00 0.00 86674.54 24903.68 68157.44 00:11:20.986 0 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4148744 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 4148744 ']' 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 4148744 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4148744 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4148744' 00:11:20.986 killing process with pid 4148744 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 4148744 00:11:20.986 Received shutdown 
signal, test time was about 10.000000 seconds 00:11:20.986 00:11:20.986 Latency(us) 00:11:20.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.986 =================================================================================================================== 00:11:20.986 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 4148744 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:20.986 rmmod nvme_tcp 00:11:20.986 rmmod nvme_fabrics 00:11:20.986 rmmod nvme_keyring 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4148409 ']' 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4148409 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 4148409 ']' 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 4148409 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4148409 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4148409' 00:11:20.986 killing process with pid 4148409 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 4148409 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 4148409 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.986 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:21.558 00:11:21.558 real 0m22.097s 00:11:21.558 user 0m25.828s 00:11:21.558 sys 0m6.501s 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.558 ************************************ 00:11:21.558 END TEST nvmf_queue_depth 00:11:21.558 ************************************ 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.558 ************************************ 00:11:21.558 START TEST nvmf_target_multipath 00:11:21.558 ************************************ 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:21.558 * Looking for test storage... 
00:11:21.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.558 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.559 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:29.708 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:29.708 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:29.708 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.708 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.709 07:17:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:29.709 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:29.709 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:29.709 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.750 ms 00:11:29.709 00:11:29.709 --- 10.0.0.2 ping statistics --- 00:11:29.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.709 rtt min/avg/max/mdev = 0.750/0.750/0.750/0.000 ms 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:11:29.709 00:11:29.709 --- 10.0.0.1 ping statistics --- 00:11:29.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.709 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:29.709 only one NIC for nvmf test 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.709 rmmod nvme_tcp 00:11:29.709 rmmod nvme_fabrics 00:11:29.709 rmmod nvme_keyring 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.709 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.093 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:31.094 00:11:31.094 real 0m9.481s 
00:11:31.094 user 0m1.979s 00:11:31.094 sys 0m5.407s 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:31.094 ************************************ 00:11:31.094 END TEST nvmf_target_multipath 00:11:31.094 ************************************ 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:31.094 ************************************ 00:11:31.094 START TEST nvmf_zcopy 00:11:31.094 ************************************ 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:31.094 * Looking for test storage... 00:11:31.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.094 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.356 07:17:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.356 07:17:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:11:31.356 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:11:37.946 07:17:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:37.946 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:37.946 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.946 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:37.947 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:37.947 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.947 07:17:45 
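Note: the device scan above keys off PCI vendor/device IDs (0x8086:0x159b for the two E810 ports found here) and then resolves each matching function to its kernel netdev through sysfs. A rough standalone equivalent, assuming pciutils and the usual sysfs layout (this is a sketch, not the test's own gather_supported_nvmf_pci_devs helper):
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          # each entry under .../net is the interface name bound to that PCI function
          [ -e "$netdir" ] && echo "Found net device under $pci: ${netdir##*/}"
      done
  done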
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.947 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:38.209 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:38.209 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.209 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.209 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.209 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.209 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:38.209 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.470 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:38.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:11:38.471 00:11:38.471 --- 10.0.0.2 ping statistics --- 00:11:38.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.471 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:11:38.471 00:11:38.471 --- 10.0.0.1 ping statistics --- 00:11:38.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.471 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4159404 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4159404 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 4159404 ']' 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.471 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:38.471 [2024-07-25 07:17:45.741950] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
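Note: before the target application comes up, nvmf_tcp_init has carved out the test bed traced above. Consolidated, the equivalent iproute2/iptables steps are (interface names as reported by the scan; the initial address flushes are omitted here):
  ip netns add cvl_0_0_ns_spdk                      # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # root namespace -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back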
00:11:38.471 [2024-07-25 07:17:45.742023] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.471 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.471 [2024-07-25 07:17:45.831225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.732 [2024-07-25 07:17:45.924096] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.732 [2024-07-25 07:17:45.924157] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.732 [2024-07-25 07:17:45.924165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.732 [2024-07-25 07:17:45.924172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.732 [2024-07-25 07:17:45.924178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.732 [2024-07-25 07:17:45.924212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.304 [2024-07-25 07:17:46.575718] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.304 [2024-07-25 07:17:46.591921] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.304 malloc0 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:39.304 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:39.305 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:39.305 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:39.305 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:39.305 { 00:11:39.305 "params": { 00:11:39.305 "name": "Nvme$subsystem", 00:11:39.305 "trtype": "$TEST_TRANSPORT", 00:11:39.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:39.305 "adrfam": "ipv4", 00:11:39.305 "trsvcid": "$NVMF_PORT", 00:11:39.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:39.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:39.305 "hdgst": ${hdgst:-false}, 00:11:39.305 "ddgst": ${ddgst:-false} 00:11:39.305 }, 00:11:39.305 "method": "bdev_nvme_attach_controller" 00:11:39.305 } 00:11:39.305 EOF 00:11:39.305 )") 00:11:39.305 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:39.305 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
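Note: rpc_cmd in the trace is essentially a front end for SPDK's scripts/rpc.py, so the target-side configuration performed above corresponds roughly to the following standalone RPC sequence (flags copied from the trace; assumes the default /var/tmp/spdk.sock RPC socket and an SPDK source tree as the working directory):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0          # 32 MiB bdev with 4 KiB blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1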
00:11:39.305 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:39.305 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:39.305 "params": { 00:11:39.305 "name": "Nvme1", 00:11:39.305 "trtype": "tcp", 00:11:39.305 "traddr": "10.0.0.2", 00:11:39.305 "adrfam": "ipv4", 00:11:39.305 "trsvcid": "4420", 00:11:39.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.305 "hdgst": false, 00:11:39.305 "ddgst": false 00:11:39.305 }, 00:11:39.305 "method": "bdev_nvme_attach_controller" 00:11:39.305 }' 00:11:39.566 [2024-07-25 07:17:46.692949] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:11:39.566 [2024-07-25 07:17:46.693014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159454 ] 00:11:39.566 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.566 [2024-07-25 07:17:46.756795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.566 [2024-07-25 07:17:46.831949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.826 Running I/O for 10 seconds... 00:11:49.834 00:11:49.834 Latency(us) 00:11:49.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.834 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:49.834 Verification LBA range: start 0x0 length 0x1000 00:11:49.834 Nvme1n1 : 10.01 9441.44 73.76 0.00 0.00 13505.60 2143.57 39758.51 00:11:49.834 =================================================================================================================== 00:11:49.834 Total : 9441.44 73.76 0.00 0.00 13505.60 2143.57 39758.51 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4161619 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:49.834 { 00:11:49.834 "params": { 00:11:49.834 "name": "Nvme$subsystem", 00:11:49.834 "trtype": "$TEST_TRANSPORT", 00:11:49.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:49.834 "adrfam": "ipv4", 00:11:49.834 "trsvcid": "$NVMF_PORT", 00:11:49.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:49.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:49.834 "hdgst": ${hdgst:-false}, 00:11:49.834 "ddgst": ${ddgst:-false} 00:11:49.834 }, 00:11:49.834 "method": "bdev_nvme_attach_controller" 00:11:49.834 } 00:11:49.834 EOF 00:11:49.834 )") 00:11:49.834 [2024-07-25 
07:17:57.186987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.834 [2024-07-25 07:17:57.187017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:49.834 [2024-07-25 07:17:57.194977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.834 [2024-07-25 07:17:57.194985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:49.834 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:49.834 "params": { 00:11:49.834 "name": "Nvme1", 00:11:49.834 "trtype": "tcp", 00:11:49.834 "traddr": "10.0.0.2", 00:11:49.834 "adrfam": "ipv4", 00:11:49.834 "trsvcid": "4420", 00:11:49.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:49.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:49.834 "hdgst": false, 00:11:49.834 "ddgst": false 00:11:49.834 }, 00:11:49.834 "method": "bdev_nvme_attach_controller" 00:11:49.834 }' 00:11:50.096 [2024-07-25 07:17:57.202994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.096 [2024-07-25 07:17:57.203006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.096 [2024-07-25 07:17:57.211016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.096 [2024-07-25 07:17:57.211023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.096 [2024-07-25 07:17:57.219036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.096 [2024-07-25 07:17:57.219043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.096 [2024-07-25 07:17:57.227055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.096 [2024-07-25 07:17:57.227063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.096 [2024-07-25 07:17:57.229993] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
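Note: both bdevperf runs consume their bdev configuration through process substitution (seen as --json /dev/fd/62 and /dev/fd/63 in the trace), with gen_nvmf_target_json emitting the attach-controller fragment printed above. A hedged sketch of the shape of that invocation and document; the exact wrapper emitted by the helper may carry additional entries (e.g. bdev_nvme_set_options), and paths are relative to an SPDK build tree:
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192
  # where the generated JSON has roughly this shape:
  # {
  #   "subsystems": [
  #     { "subsystem": "bdev",
  #       "config": [
  #         { "method": "bdev_nvme_attach_controller",
  #           "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
  #                       "adrfam": "ipv4", "trsvcid": "4420",
  #                       "subnqn": "nqn.2016-06.io.spdk:cnode1",
  #                       "hostnqn": "nqn.2016-06.io.spdk:host1",
  #                       "hdgst": false, "ddgst": false } } ] } ]
  # }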
00:11:50.096 [2024-07-25 07:17:57.230045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161619 ] 00:11:50.096 [2024-07-25 07:17:57.235076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.096 [2024-07-25 07:17:57.235083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.096 [2024-07-25 07:17:57.243097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.096 [2024-07-25 07:17:57.243104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.096 [2024-07-25 07:17:57.251117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.096 [2024-07-25 07:17:57.251124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.097 [2024-07-25 07:17:57.259137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.259144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.267157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.267164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.275179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.275186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.283198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.283208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.287766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.097 [2024-07-25 07:17:57.291223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.291231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.299244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.299251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.307263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.307270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.315284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.315292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.323307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.323318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.331326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 
07:17:57.331334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.339345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.339353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.347365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.347372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.353097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.097 [2024-07-25 07:17:57.355385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.355393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.363407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.363415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.371433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.371446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.379448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.379458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.387467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.387475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.395489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.395496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.403511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.403520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.411531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.411538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.419551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.419558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.427583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.427597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.435597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.435606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.443616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.443625] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.451640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.451649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.097 [2024-07-25 07:17:57.459661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.097 [2024-07-25 07:17:57.459668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 [2024-07-25 07:17:57.467682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.467689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 [2024-07-25 07:17:57.475702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.475709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 [2024-07-25 07:17:57.483726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.483739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 [2024-07-25 07:17:57.491747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.491755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 [2024-07-25 07:17:57.499768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.499777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 [2024-07-25 07:17:57.507789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.507798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 [2024-07-25 07:17:57.515810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.515818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 [2024-07-25 07:17:57.523835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.523844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 [2024-07-25 07:17:57.531854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.531866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 Running I/O for 5 seconds... 
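Note: the long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs that follows appears to be by design: while the 5-second randrw job is in flight, the zcopy test keeps re-issuing nvmf_subsystem_add_ns for an NSID that already exists, which exercises the subsystem pause/resume path under zero-copy I/O and is expected to fail each time. A hedged sketch of that kind of loop (an illustration, not the literal zcopy.sh code):
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!
  while kill -0 "$perfpid" 2> /dev/null; do
      # each attempt pauses the subsystem, is rejected ("NSID 1 already in use"), and resumes it
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done
  wait "$perfpid"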
00:11:50.358 [2024-07-25 07:17:57.539872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.539878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 [2024-07-25 07:17:57.561348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.561363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.358 [2024-07-25 07:17:57.571150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.358 [2024-07-25 07:17:57.571165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.579771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.579786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.588224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.588239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.597211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.597225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.605659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.605674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.614634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.614649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.623607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.623621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.632229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.632243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.640847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.640861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.649606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.649620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.658719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.658733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.667157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.667171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.675884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 
[2024-07-25 07:17:57.675897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.684828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.684841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.693435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.693449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.701724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.701738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.710331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.710345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.359 [2024-07-25 07:17:57.718962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.359 [2024-07-25 07:17:57.718975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.727165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.727179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.735682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.735696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.744590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.744604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.753626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.753639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.762191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.762209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.771136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.771150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.779579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.779594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.788605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.788619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.797085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.797099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.805729] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.805743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.814575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.814589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.822841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.822855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.832001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.832015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.841014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.841028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.849315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.849330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.857956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.857969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.866431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.866444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.874844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.874858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.883322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.883335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.891922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.891936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.900164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.900178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.908956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.908970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.918045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.918058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.926454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.926468] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.935623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.935636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.944145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.944159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.953024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.953037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.961870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.961884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.970883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.970897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.979478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.979491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.620 [2024-07-25 07:17:57.987547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.620 [2024-07-25 07:17:57.987561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.882 [2024-07-25 07:17:57.996105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.882 [2024-07-25 07:17:57.996119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.882 [2024-07-25 07:17:58.004840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.882 [2024-07-25 07:17:58.004853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.882 [2024-07-25 07:17:58.013497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.882 [2024-07-25 07:17:58.013510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.882 [2024-07-25 07:17:58.021991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.882 [2024-07-25 07:17:58.022005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.882 [2024-07-25 07:17:58.030646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.882 [2024-07-25 07:17:58.030660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.882 [2024-07-25 07:17:58.039611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.882 [2024-07-25 07:17:58.039625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.882 [2024-07-25 07:17:58.048062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.882 [2024-07-25 07:17:58.048076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.882 [2024-07-25 07:17:58.057228] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:50.882 [2024-07-25 07:17:58.057242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two messages repeat for every subsequent add-namespace attempt, every 8-9 ms (several hundred pairs over elapsed 00:11:50.882-00:11:53.568, wall clock 2024-07-25 07:17:58.065623 through 07:18:00.702444), and continue below ...]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.710924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.710938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.719634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.719648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.728732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.728746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.737102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.737117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.745537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.745552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.754354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.754368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.762524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.762538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.771005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.771023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.779354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.779367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.788308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.788322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.796757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.796771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.805745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.805759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.814139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.814153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.822851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.822865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.831504] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.831518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.840082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.840096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.848660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.848674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.857516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.857530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.866097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.866111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.874940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.874954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.883409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.883422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.892231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.892244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.900754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.900768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.909334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.568 [2024-07-25 07:18:00.909348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.568 [2024-07-25 07:18:00.918023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.569 [2024-07-25 07:18:00.918037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.569 [2024-07-25 07:18:00.926325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.569 [2024-07-25 07:18:00.926339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.569 [2024-07-25 07:18:00.934974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.569 [2024-07-25 07:18:00.934991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:00.943599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:00.943613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:00.952106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:00.952119] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:00.961124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:00.961137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:00.969930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:00.969944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:00.978728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:00.978742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:00.987467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:00.987481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:00.996260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:00.996274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.005178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.005191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.013832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.013846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.022158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.022171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.031146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.031159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.040272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.040286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.048853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.048867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.057554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.057568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.066093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.066107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.075129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.075143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.083746] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.083760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.096988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.097002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.104788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.104806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.113684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.113698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.122195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.122213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.131158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.131172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.139687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.139701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.148167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.148181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.157222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.157236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.165728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.165741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.174234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.174248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.182753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.182766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.831 [2024-07-25 07:18:01.191506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.831 [2024-07-25 07:18:01.191520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.200359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.200373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.208699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.208712] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.217599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.217613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.226167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.226181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.234861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.234875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.243353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.243366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.252354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.252369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.261233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.261247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.269911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.269928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.278770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.278784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.287744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.287757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.296328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.296341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.304629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.304643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.313412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.313426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.322384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.322398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.331369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.331383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.340147] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.340161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.348325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.348339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.357436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.357450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.366850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.366865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.375184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.375198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.384234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.384248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.392805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.392818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.401400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.401413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.410585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.410598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.418460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.418474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.427604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.427618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.436208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.436225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.444657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.444671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.093 [2024-07-25 07:18:01.452722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.093 [2024-07-25 07:18:01.452736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.461653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.461667] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.470546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.470560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.479352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.479365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.488496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.488510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.497224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.497238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.506258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.506271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.515206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.515220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.524031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.524045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.532753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.532766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.541370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.541384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.549947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.549961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.558087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.558101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.566916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.566930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.576026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.576040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.584737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.584751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.593658] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.593672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.601829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.601843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.610586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.610600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.619285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.619299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.628217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.628232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.637283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.637297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.645970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.645984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.654716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.654729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.663143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.663157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.671845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.671859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.680787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.680801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.689511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.355 [2024-07-25 07:18:01.689525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.355 [2024-07-25 07:18:01.698283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.356 [2024-07-25 07:18:01.698297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.356 [2024-07-25 07:18:01.706671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.356 [2024-07-25 07:18:01.706684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.356 [2024-07-25 07:18:01.715548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.356 [2024-07-25 07:18:01.715561] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.723697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.723711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.732736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.732750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.741179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.741193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.749634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.749648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.758292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.758305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.766063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.766077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.775184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.775197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.783666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.783679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.791833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.791846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.800304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.800317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.809449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.809463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.817822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.817835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.825905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.825919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.834221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.834235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.843154] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.843167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.851868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.851881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.617 [2024-07-25 07:18:01.860244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.617 [2024-07-25 07:18:01.860257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.869084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.869097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.877563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.877576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.886104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.886117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.894647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.894661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.903081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.903094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.911954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.911968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.919953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.919968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.928355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.928369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.936158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.936172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.945330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.945344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.954073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.954087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.963157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.963172] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.972056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.972070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.618 [2024-07-25 07:18:01.980847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.618 [2024-07-25 07:18:01.980861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.879 [2024-07-25 07:18:01.989880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.879 [2024-07-25 07:18:01.989894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.879 [2024-07-25 07:18:01.998594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.879 [2024-07-25 07:18:01.998607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.879 [2024-07-25 07:18:02.007648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.879 [2024-07-25 07:18:02.007662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.879 [2024-07-25 07:18:02.016058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.879 [2024-07-25 07:18:02.016072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.879 [2024-07-25 07:18:02.024794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.879 [2024-07-25 07:18:02.024808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.879 [2024-07-25 07:18:02.033248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.879 [2024-07-25 07:18:02.033262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.879 [2024-07-25 07:18:02.041929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.879 [2024-07-25 07:18:02.041943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.879 [2024-07-25 07:18:02.050663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.879 [2024-07-25 07:18:02.050676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.879 [2024-07-25 07:18:02.058667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.058681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.067406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.067420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.076259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.076273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.084779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.084796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.093073] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.093087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.101379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.101392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.110065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.110079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.118458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.118472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.126933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.126947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.135380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.135394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.144010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.144024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.152790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.152804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.161219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.161233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.170057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.170071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.178828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.178842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.187430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.187444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.195585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.195598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.204272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.204286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.213184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.213198] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.222175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.222189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.230974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.230988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.880 [2024-07-25 07:18:02.239464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.880 [2024-07-25 07:18:02.239478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.141 [2024-07-25 07:18:02.248250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.141 [2024-07-25 07:18:02.248270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.141 [2024-07-25 07:18:02.257585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.141 [2024-07-25 07:18:02.257599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.141 [2024-07-25 07:18:02.266607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.141 [2024-07-25 07:18:02.266621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.141 [2024-07-25 07:18:02.274721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.141 [2024-07-25 07:18:02.274735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.141 [2024-07-25 07:18:02.283007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.141 [2024-07-25 07:18:02.283021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.141 [2024-07-25 07:18:02.292009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.141 [2024-07-25 07:18:02.292023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.141 [2024-07-25 07:18:02.301183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.141 [2024-07-25 07:18:02.301197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.141 [2024-07-25 07:18:02.309355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.141 [2024-07-25 07:18:02.309368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.141 [2024-07-25 07:18:02.318083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.141 [2024-07-25 07:18:02.318097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.326440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.326454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.335213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.335227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.343811] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.343826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.352345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.352359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.360896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.360910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.369618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.369632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.378752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.378765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.387261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.387275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.395952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.395967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.404573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.404587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.413583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.413600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.422256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.422270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.430937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.430951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.439332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.439345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.448341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.448354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.457544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.457559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.142 [2024-07-25 07:18:02.465742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.142 [2024-07-25 07:18:02.465756] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.142 [2024-07-25 07:18:02.474949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.142 [2024-07-25 07:18:02.474963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.142 [2024-07-25 07:18:02.483588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.142 [2024-07-25 07:18:02.483602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.142 [2024-07-25 07:18:02.492537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.142 [2024-07-25 07:18:02.492551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.142 [2024-07-25 07:18:02.501264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.142 [2024-07-25 07:18:02.501277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.404 [2024-07-25 07:18:02.510215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.404 [2024-07-25 07:18:02.510229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.404 [2024-07-25 07:18:02.523516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.404 [2024-07-25 07:18:02.523530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.404 [2024-07-25 07:18:02.531454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.404 [2024-07-25 07:18:02.531468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.404 [2024-07-25 07:18:02.540097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.404 [2024-07-25 07:18:02.540110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.404 [2024-07-25 07:18:02.548929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.404 [2024-07-25 07:18:02.548943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.404 
00:11:55.404 Latency(us)
00:11:55.404 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:55.404 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:55.404 Nvme1n1                     :       5.01   19329.09     151.01       0.00     0.00    6616.25    2443.95   28398.93
00:11:55.404 ===================================================================================================================
00:11:55.404 Total                       :            19329.09     151.01       0.00     0.00    6616.25    2443.95   28398.93
00:11:55.404 [2024-07-25 07:18:02.557755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.404 [2024-07-25 07:18:02.557772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.404 [2024-07-25 07:18:02.563187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.404 [2024-07-25 07:18:02.563197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.404 [2024-07-25 07:18:02.571212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:55.404 [2024-07-25 07:18:02.571223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:55.404 [2024-07-25 07:18:02.579235]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.579245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.587256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.587267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.595270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.595279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.603291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.603299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.611306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.611313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.619326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.619334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.627346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.627354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.635367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.635374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.643390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.643397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.651409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.651419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.659428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.659435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.667452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.667461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.675469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.675476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 [2024-07-25 07:18:02.683489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.404 [2024-07-25 07:18:02.683496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4161619) - No such process 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@49 -- # wait 4161619 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:55.404 delay0 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.404 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:55.404 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.666 [2024-07-25 07:18:02.780647] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:02.248 Initializing NVMe Controllers 00:12:02.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:02.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:02.248 Initialization complete. Launching workers. 
00:12:02.248 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 91 00:12:02.248 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 378, failed to submit 33 00:12:02.248 success 178, unsuccess 200, failed 0 00:12:02.248 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:02.248 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:02.248 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:02.248 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:02.248 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.248 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:02.248 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.248 07:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.248 rmmod nvme_tcp 00:12:02.248 rmmod nvme_fabrics 00:12:02.248 rmmod nvme_keyring 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4159404 ']' 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4159404 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 4159404 ']' 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 4159404 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4159404 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4159404' 00:12:02.248 killing process with pid 4159404 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 4159404 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 4159404 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.248 07:18:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:04.166 00:12:04.166 real 0m32.970s 00:12:04.166 user 0m44.812s 00:12:04.166 sys 0m10.247s 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.166 ************************************ 00:12:04.166 END TEST nvmf_zcopy 00:12:04.166 ************************************ 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:04.166 ************************************ 00:12:04.166 START TEST nvmf_nmic 00:12:04.166 ************************************ 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:04.166 * Looking for test storage... 00:12:04.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.166 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.429 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:04.429 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:04.429 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:12:04.429 07:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.020 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:11.021 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:11.021 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:11.021 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:11.021 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:11.021 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:11.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:11.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:12:11.283 00:12:11.283 --- 10.0.0.2 ping statistics --- 00:12:11.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.283 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:12:11.283 00:12:11.283 --- 10.0.0.1 ping statistics --- 00:12:11.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.283 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4168690 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4168690 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 4168690 ']' 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.283 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.283 [2024-07-25 07:18:18.593972] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:12:11.283 [2024-07-25 07:18:18.594020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.283 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.544 [2024-07-25 07:18:18.661098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.544 [2024-07-25 07:18:18.728041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.544 [2024-07-25 07:18:18.728080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.544 [2024-07-25 07:18:18.728088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.544 [2024-07-25 07:18:18.728094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.544 [2024-07-25 07:18:18.728100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.544 [2024-07-25 07:18:18.728246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.544 [2024-07-25 07:18:18.728456] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.544 [2024-07-25 07:18:18.728458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.544 [2024-07-25 07:18:18.728307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 [2024-07-25 07:18:19.418172] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 Malloc0 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.114 [2024-07-25 07:18:19.474909] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:12.114 test case1: single bdev can't be used in multiple subsystems 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.114 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.375 [2024-07-25 07:18:19.510808] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:12.375 [2024-07-25 07:18:19.510827] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:12.375 [2024-07-25 07:18:19.510835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:12.375 request: 00:12:12.375 { 00:12:12.375 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:12.375 "namespace": { 
00:12:12.375 "bdev_name": "Malloc0", 00:12:12.375 "no_auto_visible": false 00:12:12.375 }, 00:12:12.375 "method": "nvmf_subsystem_add_ns", 00:12:12.375 "req_id": 1 00:12:12.375 } 00:12:12.375 Got JSON-RPC error response 00:12:12.375 response: 00:12:12.375 { 00:12:12.375 "code": -32602, 00:12:12.375 "message": "Invalid parameters" 00:12:12.375 } 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:12.375 Adding namespace failed - expected result. 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:12.375 test case2: host connect to nvmf target in multiple paths 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:12.375 [2024-07-25 07:18:19.522944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.375 07:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.758 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:15.671 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.671 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:15.671 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.671 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:15.671 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:17.581 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:17.581 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:17.581 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.581 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:17.581 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.581 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 
00:12:17.581 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:17.581 [global] 00:12:17.581 thread=1 00:12:17.581 invalidate=1 00:12:17.581 rw=write 00:12:17.581 time_based=1 00:12:17.581 runtime=1 00:12:17.581 ioengine=libaio 00:12:17.581 direct=1 00:12:17.581 bs=4096 00:12:17.581 iodepth=1 00:12:17.581 norandommap=0 00:12:17.581 numjobs=1 00:12:17.581 00:12:17.581 verify_dump=1 00:12:17.581 verify_backlog=512 00:12:17.581 verify_state_save=0 00:12:17.581 do_verify=1 00:12:17.581 verify=crc32c-intel 00:12:17.581 [job0] 00:12:17.581 filename=/dev/nvme0n1 00:12:17.581 Could not set queue depth (nvme0n1) 00:12:17.581 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:17.581 fio-3.35 00:12:17.581 Starting 1 thread 00:12:18.968 00:12:18.968 job0: (groupid=0, jobs=1): err= 0: pid=4170197: Thu Jul 25 07:18:26 2024 00:12:18.968 read: IOPS=18, BW=73.0KiB/s (74.8kB/s)(76.0KiB/1041msec) 00:12:18.968 slat (nsec): min=24572, max=25555, avg=24994.68, stdev=258.95 00:12:18.968 clat (usec): min=41353, max=42048, avg=41929.36, stdev=147.53 00:12:18.968 lat (usec): min=41378, max=42073, avg=41954.35, stdev=147.58 00:12:18.968 clat percentiles (usec): 00:12:18.968 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:12:18.968 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:18.968 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:18.968 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:18.968 | 99.99th=[42206] 00:12:18.968 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:12:18.968 slat (nsec): min=9121, max=66291, avg=24864.97, stdev=11108.97 00:12:18.968 clat (usec): min=182, max=3106, avg=444.73, stdev=191.20 00:12:18.968 lat (usec): min=192, max=3139, avg=469.59, stdev=194.10 00:12:18.968 clat percentiles (usec): 00:12:18.968 | 1.00th=[ 192], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 314], 00:12:18.968 | 30.00th=[ 347], 40.00th=[ 412], 50.00th=[ 445], 60.00th=[ 465], 00:12:18.968 | 70.00th=[ 474], 80.00th=[ 494], 90.00th=[ 627], 95.00th=[ 725], 00:12:18.968 | 99.00th=[ 766], 99.50th=[ 832], 99.90th=[ 3097], 99.95th=[ 3097], 00:12:18.968 | 99.99th=[ 3097] 00:12:18.968 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:12:18.968 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:18.968 lat (usec) : 250=3.39%, 500=75.14%, 750=16.20%, 1000=1.32% 00:12:18.968 lat (msec) : 4=0.38%, 50=3.58% 00:12:18.968 cpu : usr=0.19%, sys=1.63%, ctx=531, majf=0, minf=1 00:12:18.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:18.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.968 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.968 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:18.968 00:12:18.968 Run status group 0 (all jobs): 00:12:18.968 READ: bw=73.0KiB/s (74.8kB/s), 73.0KiB/s-73.0KiB/s (74.8kB/s-74.8kB/s), io=76.0KiB (77.8kB), run=1041-1041msec 00:12:18.968 WRITE: bw=1967KiB/s (2015kB/s), 1967KiB/s-1967KiB/s (2015kB/s-2015kB/s), io=2048KiB (2097kB), run=1041-1041msec 00:12:18.968 00:12:18.968 Disk stats (read/write): 00:12:18.968 nvme0n1: ios=65/512, merge=0/0, 
ticks=1037/218, in_queue=1255, util=98.70% 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:18.968 rmmod nvme_tcp 00:12:18.968 rmmod nvme_fabrics 00:12:18.968 rmmod nvme_keyring 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4168690 ']' 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4168690 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 4168690 ']' 00:12:18.968 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 4168690 00:12:18.969 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:12:18.969 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:18.969 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4168690 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4168690' 00:12:19.230 killing process with pid 4168690 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@969 -- # kill 4168690 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 4168690 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.230 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:21.846 00:12:21.846 real 0m17.215s 00:12:21.846 user 0m49.561s 00:12:21.846 sys 0m5.960s 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:21.846 ************************************ 00:12:21.846 END TEST nvmf_nmic 00:12:21.846 ************************************ 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:21.846 ************************************ 00:12:21.846 START TEST nvmf_fio_target 00:12:21.846 ************************************ 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:21.846 * Looking for test storage... 
00:12:21.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.846 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.847 07:18:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.847 07:18:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:12:21.847 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:28.438 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:28.438 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.438 
07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.438 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:28.438 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:28.439 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.439 07:18:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.439 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.700 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.700 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.700 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.700 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.700 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.700 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.700 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:12:28.700 00:12:28.700 --- 10.0.0.2 ping statistics --- 00:12:28.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.700 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:28.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:12:28.700 00:12:28.700 --- 10.0.0.1 ping statistics --- 00:12:28.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.700 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4174570 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4174570 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.700 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 4174570 ']' 00:12:28.701 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.701 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:28.701 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.701 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:28.701 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.961 [2024-07-25 07:18:36.105235] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
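For orientation, the trace above shows nvmf_tcp_init wiring up the test network (target namespace, 10.0.0.1/10.0.0.2 addressing, port 4420 firewall rule, ping checks) before nvmf_tgt is launched. The following is a minimal hand-written sketch of that plumbing, assuming two Linux interfaces named eth_tgt and eth_ini as stand-ins for this run's cvl_0_0/cvl_0_1 and root privileges; it is illustrative only, not the test framework's exact code.

  # target-side interface lives in its own network namespace
  ip netns add spdk_tgt_ns
  ip link set eth_tgt netns spdk_tgt_ns
  # address both ends as in the log: initiator 10.0.0.1, target 10.0.0.2
  ip addr add 10.0.0.1/24 dev eth_ini
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
  ip link set eth_ini up
  ip netns exec spdk_tgt_ns ip link set eth_tgt up
  ip netns exec spdk_tgt_ns ip link set lo up
  # open the default NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1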
00:12:28.961 [2024-07-25 07:18:36.105304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.961 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.961 [2024-07-25 07:18:36.177362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.961 [2024-07-25 07:18:36.252536] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.961 [2024-07-25 07:18:36.252575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.961 [2024-07-25 07:18:36.252582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.961 [2024-07-25 07:18:36.252588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.961 [2024-07-25 07:18:36.252594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.961 [2024-07-25 07:18:36.252731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.961 [2024-07-25 07:18:36.252865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.961 [2024-07-25 07:18:36.253022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.961 [2024-07-25 07:18:36.253023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.532 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.532 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:29.532 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.532 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:29.532 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.793 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.793 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:29.793 [2024-07-25 07:18:37.072682] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.793 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:30.053 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:30.053 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:30.315 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:30.315 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:30.315 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:30.315 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:30.575 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:30.575 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:30.835 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:30.835 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:30.835 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:31.095 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:31.096 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:31.356 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:31.356 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:31.356 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:31.617 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:31.617 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:31.877 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:31.877 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:31.877 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.138 [2024-07-25 07:18:39.353965] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.138 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:32.399 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:32.399 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.313 07:18:41 
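Condensed for reference, the provisioning sequence that fio.sh just traced looks roughly like the following when issued by hand. It assumes an SPDK checkout at ./spdk with nvmf_tgt already running and reachable over the default RPC socket; the hostnqn/hostid shown are placeholders, not this run's generated values.

  # enable the TCP transport and create 64 MiB / 512 B-block malloc bdevs (Malloc0..Malloc6)
  ./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./spdk/scripts/rpc.py bdev_malloc_create 64 512
  # build a RAID0 and a concat bdev on top of some of the malloc bdevs
  ./spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  ./spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # expose everything through one subsystem listening on NVMe/TCP 10.0.0.2:4420
  ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for ns in Malloc0 Malloc1 raid0 concat0; do
      ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
  done
  ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # connect from the initiator side; hostnqn/hostid below are placeholder values
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000 \
      --hostid=00000000-0000-0000-0000-000000000000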
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:34.313 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:34.313 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.313 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:34.313 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:34.313 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:36.227 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:36.227 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:36.227 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.227 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:36.227 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.227 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:36.227 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:36.227 [global] 00:12:36.227 thread=1 00:12:36.227 invalidate=1 00:12:36.227 rw=write 00:12:36.227 time_based=1 00:12:36.227 runtime=1 00:12:36.227 ioengine=libaio 00:12:36.227 direct=1 00:12:36.227 bs=4096 00:12:36.227 iodepth=1 00:12:36.227 norandommap=0 00:12:36.227 numjobs=1 00:12:36.227 00:12:36.227 verify_dump=1 00:12:36.227 verify_backlog=512 00:12:36.227 verify_state_save=0 00:12:36.227 do_verify=1 00:12:36.227 verify=crc32c-intel 00:12:36.227 [job0] 00:12:36.227 filename=/dev/nvme0n1 00:12:36.227 [job1] 00:12:36.227 filename=/dev/nvme0n2 00:12:36.227 [job2] 00:12:36.227 filename=/dev/nvme0n3 00:12:36.227 [job3] 00:12:36.227 filename=/dev/nvme0n4 00:12:36.227 Could not set queue depth (nvme0n1) 00:12:36.227 Could not set queue depth (nvme0n2) 00:12:36.227 Could not set queue depth (nvme0n3) 00:12:36.227 Could not set queue depth (nvme0n4) 00:12:36.488 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.488 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.488 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.488 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.488 fio-3.35 00:12:36.488 Starting 4 threads 00:12:37.875 00:12:37.875 job0: (groupid=0, jobs=1): err= 0: pid=4176292: Thu Jul 25 07:18:44 2024 00:12:37.875 read: IOPS=11, BW=47.9KiB/s (49.0kB/s)(48.0KiB/1003msec) 00:12:37.875 slat (nsec): min=24517, max=24919, avg=24687.25, stdev=100.31 00:12:37.875 clat (usec): min=41432, max=42082, avg=41915.11, stdev=181.21 00:12:37.875 lat (usec): min=41456, max=42106, avg=41939.80, stdev=181.21 00:12:37.875 clat percentiles (usec): 00:12:37.875 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 
20.00th=[41681], 00:12:37.875 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:12:37.875 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:37.875 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:37.875 | 99.99th=[42206] 00:12:37.875 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:12:37.875 slat (nsec): min=10750, max=54195, avg=33034.59, stdev=3386.63 00:12:37.875 clat (usec): min=691, max=1214, avg=929.43, stdev=78.44 00:12:37.875 lat (usec): min=725, max=1247, avg=962.47, stdev=78.71 00:12:37.875 clat percentiles (usec): 00:12:37.875 | 1.00th=[ 725], 5.00th=[ 799], 10.00th=[ 832], 20.00th=[ 857], 00:12:37.875 | 30.00th=[ 889], 40.00th=[ 922], 50.00th=[ 938], 60.00th=[ 955], 00:12:37.876 | 70.00th=[ 971], 80.00th=[ 988], 90.00th=[ 1012], 95.00th=[ 1045], 00:12:37.876 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1221], 00:12:37.876 | 99.99th=[ 1221] 00:12:37.876 bw ( KiB/s): min= 4096, max= 4096, per=50.45%, avg=4096.00, stdev= 0.00, samples=1 00:12:37.876 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:37.876 lat (usec) : 750=1.15%, 1000=81.30% 00:12:37.876 lat (msec) : 2=15.27%, 50=2.29% 00:12:37.876 cpu : usr=1.10%, sys=1.40%, ctx=527, majf=0, minf=1 00:12:37.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.876 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:37.876 job1: (groupid=0, jobs=1): err= 0: pid=4176309: Thu Jul 25 07:18:44 2024 00:12:37.876 read: IOPS=400, BW=1602KiB/s (1641kB/s)(1604KiB/1001msec) 00:12:37.876 slat (nsec): min=8420, max=50017, avg=25189.14, stdev=5236.79 00:12:37.876 clat (usec): min=1086, max=1556, avg=1389.63, stdev=77.80 00:12:37.876 lat (usec): min=1103, max=1581, avg=1414.82, stdev=78.19 00:12:37.876 clat percentiles (usec): 00:12:37.876 | 1.00th=[ 1156], 5.00th=[ 1237], 10.00th=[ 1303], 20.00th=[ 1336], 00:12:37.876 | 30.00th=[ 1369], 40.00th=[ 1385], 50.00th=[ 1401], 60.00th=[ 1418], 00:12:37.876 | 70.00th=[ 1434], 80.00th=[ 1450], 90.00th=[ 1483], 95.00th=[ 1500], 00:12:37.876 | 99.00th=[ 1532], 99.50th=[ 1532], 99.90th=[ 1565], 99.95th=[ 1565], 00:12:37.876 | 99.99th=[ 1565] 00:12:37.876 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:37.876 slat (usec): min=10, max=9448, avg=49.26, stdev=416.28 00:12:37.876 clat (usec): min=376, max=1055, avg=775.67, stdev=127.44 00:12:37.876 lat (usec): min=387, max=10123, avg=824.93, stdev=431.59 00:12:37.876 clat percentiles (usec): 00:12:37.876 | 1.00th=[ 502], 5.00th=[ 586], 10.00th=[ 619], 20.00th=[ 660], 00:12:37.876 | 30.00th=[ 709], 40.00th=[ 742], 50.00th=[ 766], 60.00th=[ 791], 00:12:37.876 | 70.00th=[ 832], 80.00th=[ 898], 90.00th=[ 955], 95.00th=[ 996], 00:12:37.876 | 99.00th=[ 1029], 99.50th=[ 1045], 99.90th=[ 1057], 99.95th=[ 1057], 00:12:37.876 | 99.99th=[ 1057] 00:12:37.876 bw ( KiB/s): min= 4096, max= 4096, per=50.45%, avg=4096.00, stdev= 0.00, samples=1 00:12:37.876 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:37.876 lat (usec) : 500=0.44%, 750=23.55%, 1000=29.90% 00:12:37.876 lat (msec) : 2=46.11% 00:12:37.876 cpu : usr=1.00%, sys=3.10%, ctx=915, majf=0, minf=1 00:12:37.876 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.876 issued rwts: total=401,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:37.876 job2: (groupid=0, jobs=1): err= 0: pid=4176328: Thu Jul 25 07:18:44 2024 00:12:37.876 read: IOPS=416, BW=1666KiB/s (1706kB/s)(1668KiB/1001msec) 00:12:37.876 slat (nsec): min=24335, max=59450, avg=25363.94, stdev=3220.09 00:12:37.876 clat (usec): min=1064, max=1414, avg=1268.76, stdev=56.30 00:12:37.876 lat (usec): min=1089, max=1439, avg=1294.12, stdev=56.41 00:12:37.876 clat percentiles (usec): 00:12:37.876 | 1.00th=[ 1090], 5.00th=[ 1139], 10.00th=[ 1205], 20.00th=[ 1237], 00:12:37.876 | 30.00th=[ 1254], 40.00th=[ 1270], 50.00th=[ 1270], 60.00th=[ 1287], 00:12:37.876 | 70.00th=[ 1303], 80.00th=[ 1319], 90.00th=[ 1336], 95.00th=[ 1352], 00:12:37.876 | 99.00th=[ 1385], 99.50th=[ 1401], 99.90th=[ 1418], 99.95th=[ 1418], 00:12:37.876 | 99.99th=[ 1418] 00:12:37.876 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:37.876 slat (nsec): min=9600, max=68218, avg=30266.64, stdev=7988.75 00:12:37.876 clat (usec): min=603, max=1170, avg=846.84, stdev=102.26 00:12:37.876 lat (usec): min=615, max=1202, avg=877.11, stdev=103.79 00:12:37.876 clat percentiles (usec): 00:12:37.876 | 1.00th=[ 635], 5.00th=[ 685], 10.00th=[ 734], 20.00th=[ 766], 00:12:37.876 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 865], 00:12:37.876 | 70.00th=[ 922], 80.00th=[ 955], 90.00th=[ 979], 95.00th=[ 1004], 00:12:37.876 | 99.00th=[ 1074], 99.50th=[ 1074], 99.90th=[ 1172], 99.95th=[ 1172], 00:12:37.876 | 99.99th=[ 1172] 00:12:37.876 bw ( KiB/s): min= 4096, max= 4096, per=50.45%, avg=4096.00, stdev= 0.00, samples=1 00:12:37.876 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:37.876 lat (usec) : 750=8.18%, 1000=43.70% 00:12:37.876 lat (msec) : 2=48.12% 00:12:37.876 cpu : usr=1.00%, sys=3.20%, ctx=932, majf=0, minf=1 00:12:37.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.876 issued rwts: total=417,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:37.876 job3: (groupid=0, jobs=1): err= 0: pid=4176335: Thu Jul 25 07:18:44 2024 00:12:37.876 read: IOPS=13, BW=55.5KiB/s (56.8kB/s)(56.0KiB/1009msec) 00:12:37.876 slat (nsec): min=26163, max=28602, avg=26570.07, stdev=621.96 00:12:37.876 clat (usec): min=41879, max=42139, avg=41970.48, stdev=59.50 00:12:37.876 lat (usec): min=41905, max=42166, avg=41997.05, stdev=59.46 00:12:37.876 clat percentiles (usec): 00:12:37.876 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:12:37.876 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:37.876 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:37.876 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:37.876 | 99.99th=[42206] 00:12:37.876 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:12:37.876 slat (usec): min=9, max=3640, avg=44.56, stdev=186.35 00:12:37.876 clat (usec): min=489, max=995, avg=762.79, stdev=90.78 
00:12:37.876 lat (usec): min=499, max=4444, avg=807.36, stdev=211.49 00:12:37.876 clat percentiles (usec): 00:12:37.876 | 1.00th=[ 515], 5.00th=[ 603], 10.00th=[ 660], 20.00th=[ 693], 00:12:37.876 | 30.00th=[ 709], 40.00th=[ 734], 50.00th=[ 766], 60.00th=[ 791], 00:12:37.876 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 873], 95.00th=[ 898], 00:12:37.876 | 99.00th=[ 955], 99.50th=[ 963], 99.90th=[ 996], 99.95th=[ 996], 00:12:37.876 | 99.99th=[ 996] 00:12:37.876 bw ( KiB/s): min= 4096, max= 4096, per=50.45%, avg=4096.00, stdev= 0.00, samples=1 00:12:37.876 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:37.876 lat (usec) : 500=0.57%, 750=42.78%, 1000=53.99% 00:12:37.876 lat (msec) : 50=2.66% 00:12:37.876 cpu : usr=0.79%, sys=2.38%, ctx=530, majf=0, minf=1 00:12:37.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.876 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:37.876 00:12:37.876 Run status group 0 (all jobs): 00:12:37.876 READ: bw=3346KiB/s (3426kB/s), 47.9KiB/s-1666KiB/s (49.0kB/s-1706kB/s), io=3376KiB (3457kB), run=1001-1009msec 00:12:37.876 WRITE: bw=8119KiB/s (8314kB/s), 2030KiB/s-2046KiB/s (2078kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1009msec 00:12:37.876 00:12:37.876 Disk stats (read/write): 00:12:37.876 nvme0n1: ios=30/512, merge=0/0, ticks=1177/434, in_queue=1611, util=84.07% 00:12:37.876 nvme0n2: ios=335/512, merge=0/0, ticks=671/386, in_queue=1057, util=91.12% 00:12:37.876 nvme0n3: ios=348/512, merge=0/0, ticks=483/384, in_queue=867, util=95.25% 00:12:37.876 nvme0n4: ios=69/512, merge=0/0, ticks=575/325, in_queue=900, util=97.22% 00:12:37.876 07:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:37.876 [global] 00:12:37.876 thread=1 00:12:37.876 invalidate=1 00:12:37.876 rw=randwrite 00:12:37.876 time_based=1 00:12:37.876 runtime=1 00:12:37.876 ioengine=libaio 00:12:37.876 direct=1 00:12:37.876 bs=4096 00:12:37.876 iodepth=1 00:12:37.876 norandommap=0 00:12:37.876 numjobs=1 00:12:37.876 00:12:37.876 verify_dump=1 00:12:37.876 verify_backlog=512 00:12:37.876 verify_state_save=0 00:12:37.876 do_verify=1 00:12:37.876 verify=crc32c-intel 00:12:37.876 [job0] 00:12:37.876 filename=/dev/nvme0n1 00:12:37.876 [job1] 00:12:37.876 filename=/dev/nvme0n2 00:12:37.876 [job2] 00:12:37.876 filename=/dev/nvme0n3 00:12:37.876 [job3] 00:12:37.876 filename=/dev/nvme0n4 00:12:37.876 Could not set queue depth (nvme0n1) 00:12:37.876 Could not set queue depth (nvme0n2) 00:12:37.876 Could not set queue depth (nvme0n3) 00:12:37.876 Could not set queue depth (nvme0n4) 00:12:38.138 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:38.138 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:38.138 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:38.138 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:38.138 fio-3.35 00:12:38.138 Starting 4 threads 00:12:39.542 00:12:39.542 job0: (groupid=0, 
jobs=1): err= 0: pid=4176775: Thu Jul 25 07:18:46 2024 00:12:39.542 read: IOPS=354, BW=1419KiB/s (1453kB/s)(1420KiB/1001msec) 00:12:39.542 slat (nsec): min=24141, max=58483, avg=25517.80, stdev=3813.91 00:12:39.542 clat (usec): min=1197, max=1616, avg=1391.05, stdev=57.56 00:12:39.542 lat (usec): min=1221, max=1641, avg=1416.57, stdev=57.71 00:12:39.542 clat percentiles (usec): 00:12:39.542 | 1.00th=[ 1237], 5.00th=[ 1287], 10.00th=[ 1319], 20.00th=[ 1352], 00:12:39.542 | 30.00th=[ 1369], 40.00th=[ 1385], 50.00th=[ 1401], 60.00th=[ 1401], 00:12:39.542 | 70.00th=[ 1418], 80.00th=[ 1434], 90.00th=[ 1450], 95.00th=[ 1483], 00:12:39.542 | 99.00th=[ 1532], 99.50th=[ 1565], 99.90th=[ 1614], 99.95th=[ 1614], 00:12:39.542 | 99.99th=[ 1614] 00:12:39.542 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:39.542 slat (nsec): min=9632, max=68719, avg=31819.32, stdev=4007.28 00:12:39.542 clat (usec): min=567, max=1144, avg=925.43, stdev=78.08 00:12:39.542 lat (usec): min=599, max=1176, avg=957.24, stdev=78.76 00:12:39.542 clat percentiles (usec): 00:12:39.542 | 1.00th=[ 709], 5.00th=[ 783], 10.00th=[ 824], 20.00th=[ 865], 00:12:39.542 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[ 938], 60.00th=[ 955], 00:12:39.542 | 70.00th=[ 963], 80.00th=[ 988], 90.00th=[ 1012], 95.00th=[ 1029], 00:12:39.542 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1139], 99.95th=[ 1139], 00:12:39.542 | 99.99th=[ 1139] 00:12:39.542 bw ( KiB/s): min= 4096, max= 4096, per=50.10%, avg=4096.00, stdev= 0.00, samples=1 00:12:39.542 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:39.542 lat (usec) : 750=1.96%, 1000=49.48% 00:12:39.542 lat (msec) : 2=48.56% 00:12:39.542 cpu : usr=1.50%, sys=2.40%, ctx=870, majf=0, minf=1 00:12:39.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:39.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.543 issued rwts: total=355,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:39.543 job1: (groupid=0, jobs=1): err= 0: pid=4176790: Thu Jul 25 07:18:46 2024 00:12:39.543 read: IOPS=369, BW=1479KiB/s (1514kB/s)(1480KiB/1001msec) 00:12:39.543 slat (nsec): min=26289, max=44863, avg=27314.84, stdev=2427.81 00:12:39.543 clat (usec): min=813, max=4979, avg=1261.86, stdev=213.21 00:12:39.543 lat (usec): min=840, max=5011, avg=1289.18, stdev=213.33 00:12:39.543 clat percentiles (usec): 00:12:39.543 | 1.00th=[ 1012], 5.00th=[ 1106], 10.00th=[ 1139], 20.00th=[ 1205], 00:12:39.543 | 30.00th=[ 1221], 40.00th=[ 1237], 50.00th=[ 1254], 60.00th=[ 1270], 00:12:39.543 | 70.00th=[ 1287], 80.00th=[ 1319], 90.00th=[ 1352], 95.00th=[ 1385], 00:12:39.543 | 99.00th=[ 1467], 99.50th=[ 1713], 99.90th=[ 4948], 99.95th=[ 4948], 00:12:39.543 | 99.99th=[ 4948] 00:12:39.543 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:39.543 slat (usec): min=10, max=25223, avg=83.49, stdev=1113.24 00:12:39.543 clat (usec): min=560, max=4017, avg=924.01, stdev=173.78 00:12:39.543 lat (usec): min=597, max=26462, avg=1007.51, stdev=1140.40 00:12:39.543 clat percentiles (usec): 00:12:39.543 | 1.00th=[ 611], 5.00th=[ 734], 10.00th=[ 783], 20.00th=[ 840], 00:12:39.543 | 30.00th=[ 873], 40.00th=[ 906], 50.00th=[ 930], 60.00th=[ 955], 00:12:39.543 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1057], 00:12:39.543 | 99.00th=[ 1221], 99.50th=[ 1352], 
99.90th=[ 4015], 99.95th=[ 4015], 00:12:39.543 | 99.99th=[ 4015] 00:12:39.543 bw ( KiB/s): min= 3928, max= 3928, per=48.05%, avg=3928.00, stdev= 0.00, samples=1 00:12:39.543 iops : min= 982, max= 982, avg=982.00, stdev= 0.00, samples=1 00:12:39.543 lat (usec) : 750=3.97%, 1000=43.88% 00:12:39.543 lat (msec) : 2=51.93%, 10=0.23% 00:12:39.543 cpu : usr=2.10%, sys=3.60%, ctx=885, majf=0, minf=1 00:12:39.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:39.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.543 issued rwts: total=370,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:39.543 job2: (groupid=0, jobs=1): err= 0: pid=4176809: Thu Jul 25 07:18:46 2024 00:12:39.543 read: IOPS=11, BW=47.9KiB/s (49.1kB/s)(48.0KiB/1002msec) 00:12:39.543 slat (nsec): min=25376, max=26303, avg=25694.42, stdev=238.45 00:12:39.543 clat (usec): min=41862, max=42141, avg=41983.26, stdev=81.71 00:12:39.543 lat (usec): min=41888, max=42167, avg=42008.96, stdev=81.82 00:12:39.543 clat percentiles (usec): 00:12:39.543 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:12:39.543 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:39.543 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:39.543 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:39.543 | 99.99th=[42206] 00:12:39.543 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:12:39.543 slat (nsec): min=9482, max=78327, avg=31646.84, stdev=4494.84 00:12:39.543 clat (usec): min=624, max=1236, avg=931.78, stdev=83.60 00:12:39.543 lat (usec): min=636, max=1267, avg=963.43, stdev=84.56 00:12:39.543 clat percentiles (usec): 00:12:39.543 | 1.00th=[ 693], 5.00th=[ 783], 10.00th=[ 824], 20.00th=[ 857], 00:12:39.543 | 30.00th=[ 898], 40.00th=[ 922], 50.00th=[ 947], 60.00th=[ 963], 00:12:39.543 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1057], 00:12:39.543 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1237], 99.95th=[ 1237], 00:12:39.543 | 99.99th=[ 1237] 00:12:39.543 bw ( KiB/s): min= 4096, max= 4096, per=50.10%, avg=4096.00, stdev= 0.00, samples=1 00:12:39.543 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:39.543 lat (usec) : 750=2.86%, 1000=76.72% 00:12:39.543 lat (msec) : 2=18.13%, 50=2.29% 00:12:39.543 cpu : usr=1.30%, sys=1.90%, ctx=525, majf=0, minf=1 00:12:39.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:39.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.543 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:39.543 job3: (groupid=0, jobs=1): err= 0: pid=4176816: Thu Jul 25 07:18:46 2024 00:12:39.543 read: IOPS=13, BW=55.9KiB/s (57.2kB/s)(56.0KiB/1002msec) 00:12:39.543 slat (nsec): min=25269, max=27089, avg=26022.00, stdev=590.84 00:12:39.543 clat (usec): min=1231, max=42937, avg=36370.82, stdev=14868.28 00:12:39.543 lat (usec): min=1256, max=42962, avg=36396.85, stdev=14868.52 00:12:39.543 clat percentiles (usec): 00:12:39.543 | 1.00th=[ 1237], 5.00th=[ 1237], 10.00th=[ 1352], 20.00th=[41681], 00:12:39.543 | 30.00th=[41681], 40.00th=[42206], 
50.00th=[42206], 60.00th=[42206], 00:12:39.543 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:12:39.543 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:12:39.543 | 99.99th=[42730] 00:12:39.543 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:12:39.543 slat (nsec): min=9483, max=51835, avg=32960.34, stdev=4729.31 00:12:39.543 clat (usec): min=379, max=1313, avg=919.12, stdev=119.37 00:12:39.543 lat (usec): min=412, max=1345, avg=952.08, stdev=120.22 00:12:39.543 clat percentiles (usec): 00:12:39.543 | 1.00th=[ 498], 5.00th=[ 709], 10.00th=[ 766], 20.00th=[ 840], 00:12:39.543 | 30.00th=[ 889], 40.00th=[ 922], 50.00th=[ 947], 60.00th=[ 963], 00:12:39.543 | 70.00th=[ 979], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1074], 00:12:39.543 | 99.00th=[ 1139], 99.50th=[ 1205], 99.90th=[ 1319], 99.95th=[ 1319], 00:12:39.543 | 99.99th=[ 1319] 00:12:39.543 bw ( KiB/s): min= 4096, max= 4096, per=50.10%, avg=4096.00, stdev= 0.00, samples=1 00:12:39.543 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:39.543 lat (usec) : 500=1.14%, 750=7.22%, 1000=69.01% 00:12:39.543 lat (msec) : 2=20.34%, 50=2.28% 00:12:39.543 cpu : usr=1.20%, sys=2.00%, ctx=528, majf=0, minf=1 00:12:39.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:39.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.543 issued rwts: total=14,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:39.543 00:12:39.543 Run status group 0 (all jobs): 00:12:39.543 READ: bw=2998KiB/s (3070kB/s), 47.9KiB/s-1479KiB/s (49.1kB/s-1514kB/s), io=3004KiB (3076kB), run=1001-1002msec 00:12:39.543 WRITE: bw=8176KiB/s (8372kB/s), 2044KiB/s-2046KiB/s (2093kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1002msec 00:12:39.543 00:12:39.543 Disk stats (read/write): 00:12:39.543 nvme0n1: ios=288/512, merge=0/0, ticks=1228/451, in_queue=1679, util=92.38% 00:12:39.543 nvme0n2: ios=277/512, merge=0/0, ticks=1309/382, in_queue=1691, util=98.88% 00:12:39.543 nvme0n3: ios=60/512, merge=0/0, ticks=454/405, in_queue=859, util=93.44% 00:12:39.543 nvme0n4: ios=49/512, merge=0/0, ticks=1243/452, in_queue=1695, util=99.14% 00:12:39.543 07:18:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:39.543 [global] 00:12:39.543 thread=1 00:12:39.543 invalidate=1 00:12:39.543 rw=write 00:12:39.543 time_based=1 00:12:39.543 runtime=1 00:12:39.543 ioengine=libaio 00:12:39.543 direct=1 00:12:39.543 bs=4096 00:12:39.543 iodepth=128 00:12:39.543 norandommap=0 00:12:39.543 numjobs=1 00:12:39.543 00:12:39.543 verify_dump=1 00:12:39.543 verify_backlog=512 00:12:39.543 verify_state_save=0 00:12:39.543 do_verify=1 00:12:39.543 verify=crc32c-intel 00:12:39.543 [job0] 00:12:39.543 filename=/dev/nvme0n1 00:12:39.543 [job1] 00:12:39.543 filename=/dev/nvme0n2 00:12:39.543 [job2] 00:12:39.543 filename=/dev/nvme0n3 00:12:39.543 [job3] 00:12:39.543 filename=/dev/nvme0n4 00:12:39.543 Could not set queue depth (nvme0n1) 00:12:39.543 Could not set queue depth (nvme0n2) 00:12:39.543 Could not set queue depth (nvme0n3) 00:12:39.543 Could not set queue depth (nvme0n4) 00:12:39.809 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:12:39.809 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:39.809 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:39.809 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:39.809 fio-3.35 00:12:39.809 Starting 4 threads 00:12:41.223 00:12:41.223 job0: (groupid=0, jobs=1): err= 0: pid=4177258: Thu Jul 25 07:18:48 2024 00:12:41.223 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:12:41.223 slat (nsec): min=952, max=20001k, avg=135602.87, stdev=869623.37 00:12:41.223 clat (usec): min=5011, max=47328, avg=14930.48, stdev=7148.70 00:12:41.223 lat (usec): min=5018, max=47338, avg=15066.08, stdev=7232.28 00:12:41.223 clat percentiles (usec): 00:12:41.223 | 1.00th=[ 6783], 5.00th=[ 8586], 10.00th=[ 8586], 20.00th=[ 9110], 00:12:41.223 | 30.00th=[10421], 40.00th=[11994], 50.00th=[13042], 60.00th=[15401], 00:12:41.223 | 70.00th=[16450], 80.00th=[17433], 90.00th=[24511], 95.00th=[31327], 00:12:41.223 | 99.00th=[40109], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:12:41.223 | 99.99th=[47449] 00:12:41.223 write: IOPS=3764, BW=14.7MiB/s (15.4MB/s)(14.9MiB/1011msec); 0 zone resets 00:12:41.223 slat (nsec): min=1671, max=7904.6k, avg=128244.25, stdev=532130.65 00:12:41.223 clat (usec): min=1316, max=57161, avg=19645.49, stdev=11003.41 00:12:41.223 lat (usec): min=1328, max=57170, avg=19773.73, stdev=11063.26 00:12:41.223 clat percentiles (usec): 00:12:41.223 | 1.00th=[ 3163], 5.00th=[ 4948], 10.00th=[ 5735], 20.00th=[11469], 00:12:41.223 | 30.00th=[13698], 40.00th=[15926], 50.00th=[18220], 60.00th=[19792], 00:12:41.223 | 70.00th=[22414], 80.00th=[26084], 90.00th=[34866], 95.00th=[44303], 00:12:41.223 | 99.00th=[53216], 99.50th=[53740], 99.90th=[56886], 99.95th=[57410], 00:12:41.223 | 99.99th=[57410] 00:12:41.223 bw ( KiB/s): min=13040, max=16384, per=17.52%, avg=14712.00, stdev=2364.57, samples=2 00:12:41.223 iops : min= 3260, max= 4096, avg=3678.00, stdev=591.14, samples=2 00:12:41.223 lat (msec) : 2=0.26%, 4=1.33%, 10=21.83%, 20=49.12%, 50=26.24% 00:12:41.223 lat (msec) : 100=1.23% 00:12:41.223 cpu : usr=3.56%, sys=3.66%, ctx=478, majf=0, minf=1 00:12:41.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:41.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:41.223 issued rwts: total=3584,3806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:41.223 job1: (groupid=0, jobs=1): err= 0: pid=4177268: Thu Jul 25 07:18:48 2024 00:12:41.223 read: IOPS=4491, BW=17.5MiB/s (18.4MB/s)(18.0MiB/1026msec) 00:12:41.223 slat (nsec): min=926, max=24470k, avg=75248.45, stdev=727999.55 00:12:41.223 clat (usec): min=2803, max=65064, avg=11489.37, stdev=7347.79 00:12:41.223 lat (usec): min=2805, max=65071, avg=11564.61, stdev=7400.07 00:12:41.223 clat percentiles (usec): 00:12:41.223 | 1.00th=[ 4424], 5.00th=[ 5014], 10.00th=[ 5866], 20.00th=[ 7177], 00:12:41.223 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 9241], 60.00th=[10290], 00:12:41.223 | 70.00th=[11076], 80.00th=[13960], 90.00th=[22414], 95.00th=[26346], 00:12:41.223 | 99.00th=[41681], 99.50th=[41681], 99.90th=[65274], 99.95th=[65274], 00:12:41.223 | 99.99th=[65274] 00:12:41.223 write: IOPS=4823, BW=18.8MiB/s 
(19.8MB/s)(19.3MiB/1026msec); 0 zone resets 00:12:41.223 slat (nsec): min=1610, max=12608k, avg=98385.08, stdev=542312.34 00:12:41.223 clat (usec): min=1389, max=57481, avg=15305.50, stdev=10228.73 00:12:41.223 lat (usec): min=1418, max=57504, avg=15403.89, stdev=10277.36 00:12:41.223 clat percentiles (usec): 00:12:41.223 | 1.00th=[ 3261], 5.00th=[ 5276], 10.00th=[ 6259], 20.00th=[ 7701], 00:12:41.223 | 30.00th=[ 8291], 40.00th=[10290], 50.00th=[12125], 60.00th=[15008], 00:12:41.223 | 70.00th=[17171], 80.00th=[21103], 90.00th=[28705], 95.00th=[38536], 00:12:41.223 | 99.00th=[53740], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:12:41.223 | 99.99th=[57410] 00:12:41.223 bw ( KiB/s): min=16384, max=22192, per=22.96%, avg=19288.00, stdev=4106.88, samples=2 00:12:41.223 iops : min= 4096, max= 5548, avg=4822.00, stdev=1026.72, samples=2 00:12:41.223 lat (msec) : 2=0.12%, 4=0.72%, 10=45.77%, 20=36.42%, 50=16.10% 00:12:41.223 lat (msec) : 100=0.87% 00:12:41.223 cpu : usr=2.73%, sys=5.76%, ctx=597, majf=0, minf=1 00:12:41.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:41.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:41.223 issued rwts: total=4608,4949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:41.223 job2: (groupid=0, jobs=1): err= 0: pid=4177291: Thu Jul 25 07:18:48 2024 00:12:41.223 read: IOPS=3924, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1007msec) 00:12:41.223 slat (nsec): min=927, max=13634k, avg=102230.26, stdev=725154.16 00:12:41.223 clat (usec): min=5882, max=38211, avg=13241.34, stdev=5821.68 00:12:41.223 lat (usec): min=5889, max=38220, avg=13343.57, stdev=5876.19 00:12:41.223 clat percentiles (usec): 00:12:41.223 | 1.00th=[ 6259], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8586], 00:12:41.223 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[11600], 60.00th=[13173], 00:12:41.223 | 70.00th=[15270], 80.00th=[17171], 90.00th=[20579], 95.00th=[24511], 00:12:41.223 | 99.00th=[34341], 99.50th=[36439], 99.90th=[38011], 99.95th=[38011], 00:12:41.223 | 99.99th=[38011] 00:12:41.223 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:12:41.223 slat (nsec): min=1677, max=23239k, avg=140283.84, stdev=749925.76 00:12:41.223 clat (usec): min=3426, max=72342, avg=17673.27, stdev=11612.95 00:12:41.223 lat (usec): min=4297, max=72352, avg=17813.56, stdev=11677.11 00:12:41.223 clat percentiles (usec): 00:12:41.223 | 1.00th=[ 5604], 5.00th=[ 6652], 10.00th=[ 7111], 20.00th=[ 8455], 00:12:41.223 | 30.00th=[10814], 40.00th=[12780], 50.00th=[13829], 60.00th=[16712], 00:12:41.223 | 70.00th=[20317], 80.00th=[25560], 90.00th=[29492], 95.00th=[40109], 00:12:41.223 | 99.00th=[66847], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:12:41.223 | 99.99th=[71828] 00:12:41.223 bw ( KiB/s): min=12288, max=20480, per=19.51%, avg=16384.00, stdev=5792.62, samples=2 00:12:41.223 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:12:41.223 lat (msec) : 4=0.01%, 10=31.59%, 20=47.59%, 50=19.43%, 100=1.38% 00:12:41.223 cpu : usr=3.08%, sys=4.37%, ctx=447, majf=0, minf=1 00:12:41.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:41.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:41.223 issued rwts: 
total=3952,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:41.223 job3: (groupid=0, jobs=1): err= 0: pid=4177299: Thu Jul 25 07:18:48 2024 00:12:41.223 read: IOPS=8135, BW=31.8MiB/s (33.3MB/s)(32.0MiB/1007msec) 00:12:41.223 slat (nsec): min=970, max=7071.9k, avg=58327.02, stdev=409279.08 00:12:41.223 clat (usec): min=2866, max=15280, avg=7631.38, stdev=1813.98 00:12:41.223 lat (usec): min=3253, max=15287, avg=7689.71, stdev=1831.59 00:12:41.223 clat percentiles (usec): 00:12:41.223 | 1.00th=[ 4621], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 6325], 00:12:41.223 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 7177], 60.00th=[ 7504], 00:12:41.223 | 70.00th=[ 8160], 80.00th=[ 8979], 90.00th=[10159], 95.00th=[11469], 00:12:41.223 | 99.00th=[12780], 99.50th=[14222], 99.90th=[15270], 99.95th=[15270], 00:12:41.223 | 99.99th=[15270] 00:12:41.223 write: IOPS=8633, BW=33.7MiB/s (35.4MB/s)(34.0MiB/1007msec); 0 zone resets 00:12:41.223 slat (nsec): min=1634, max=5021.3k, avg=56228.87, stdev=292789.30 00:12:41.223 clat (usec): min=1215, max=27507, avg=7503.50, stdev=3239.85 00:12:41.223 lat (usec): min=1226, max=27509, avg=7559.73, stdev=3256.35 00:12:41.223 clat percentiles (usec): 00:12:41.223 | 1.00th=[ 2900], 5.00th=[ 3884], 10.00th=[ 4490], 20.00th=[ 5276], 00:12:41.223 | 30.00th=[ 5932], 40.00th=[ 6259], 50.00th=[ 6587], 60.00th=[ 7373], 00:12:41.223 | 70.00th=[ 8160], 80.00th=[ 9241], 90.00th=[11207], 95.00th=[13042], 00:12:41.223 | 99.00th=[22414], 99.50th=[23987], 99.90th=[26608], 99.95th=[27395], 00:12:41.223 | 99.99th=[27395] 00:12:41.224 bw ( KiB/s): min=31672, max=36864, per=40.80%, avg=34268.00, stdev=3671.30, samples=2 00:12:41.224 iops : min= 7918, max= 9216, avg=8567.00, stdev=917.82, samples=2 00:12:41.224 lat (msec) : 2=0.02%, 4=3.43%, 10=82.78%, 20=13.06%, 50=0.70% 00:12:41.224 cpu : usr=4.67%, sys=6.76%, ctx=851, majf=0, minf=1 00:12:41.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:41.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:41.224 issued rwts: total=8192,8694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:41.224 00:12:41.224 Run status group 0 (all jobs): 00:12:41.224 READ: bw=77.4MiB/s (81.2MB/s), 13.8MiB/s-31.8MiB/s (14.5MB/s-33.3MB/s), io=79.4MiB (83.3MB), run=1007-1026msec 00:12:41.224 WRITE: bw=82.0MiB/s (86.0MB/s), 14.7MiB/s-33.7MiB/s (15.4MB/s-35.4MB/s), io=84.2MiB (88.2MB), run=1007-1026msec 00:12:41.224 00:12:41.224 Disk stats (read/write): 00:12:41.224 nvme0n1: ios=2871/3072, merge=0/0, ticks=43497/60595, in_queue=104092, util=98.40% 00:12:41.224 nvme0n2: ios=4007/4096, merge=0/0, ticks=40404/48035, in_queue=88439, util=94.50% 00:12:41.224 nvme0n3: ios=3092/3127, merge=0/0, ticks=42171/57534, in_queue=99705, util=97.26% 00:12:41.224 nvme0n4: ios=7211/7338, merge=0/0, ticks=53362/49183, in_queue=102545, util=98.19% 00:12:41.224 07:18:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:41.224 [global] 00:12:41.224 thread=1 00:12:41.224 invalidate=1 00:12:41.224 rw=randwrite 00:12:41.224 time_based=1 00:12:41.224 runtime=1 00:12:41.224 ioengine=libaio 00:12:41.224 direct=1 00:12:41.224 bs=4096 00:12:41.224 iodepth=128 00:12:41.224 norandommap=0 
00:12:41.224 numjobs=1 00:12:41.224 00:12:41.224 verify_dump=1 00:12:41.224 verify_backlog=512 00:12:41.224 verify_state_save=0 00:12:41.224 do_verify=1 00:12:41.224 verify=crc32c-intel 00:12:41.224 [job0] 00:12:41.224 filename=/dev/nvme0n1 00:12:41.224 [job1] 00:12:41.224 filename=/dev/nvme0n2 00:12:41.224 [job2] 00:12:41.224 filename=/dev/nvme0n3 00:12:41.224 [job3] 00:12:41.224 filename=/dev/nvme0n4 00:12:41.224 Could not set queue depth (nvme0n1) 00:12:41.224 Could not set queue depth (nvme0n2) 00:12:41.224 Could not set queue depth (nvme0n3) 00:12:41.224 Could not set queue depth (nvme0n4) 00:12:41.491 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:41.491 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:41.491 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:41.491 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:41.491 fio-3.35 00:12:41.491 Starting 4 threads 00:12:42.886 00:12:42.886 job0: (groupid=0, jobs=1): err= 0: pid=4177763: Thu Jul 25 07:18:49 2024 00:12:42.886 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:12:42.886 slat (nsec): min=1010, max=20104k, avg=113870.54, stdev=834149.24 00:12:42.886 clat (usec): min=7866, max=46569, avg=13762.87, stdev=5773.75 00:12:42.886 lat (usec): min=7881, max=46577, avg=13876.74, stdev=5862.50 00:12:42.886 clat percentiles (usec): 00:12:42.886 | 1.00th=[ 8094], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10028], 00:12:42.886 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11994], 60.00th=[12780], 00:12:42.886 | 70.00th=[14484], 80.00th=[15795], 90.00th=[19792], 95.00th=[26608], 00:12:42.886 | 99.00th=[39060], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:12:42.886 | 99.99th=[46400] 00:12:42.886 write: IOPS=4166, BW=16.3MiB/s (17.1MB/s)(16.3MiB/1003msec); 0 zone resets 00:12:42.886 slat (nsec): min=1599, max=9301.0k, avg=121276.72, stdev=629137.17 00:12:42.886 clat (usec): min=1168, max=56858, avg=16957.36, stdev=11154.79 00:12:42.886 lat (usec): min=1177, max=56866, avg=17078.64, stdev=11216.47 00:12:42.886 clat percentiles (usec): 00:12:42.886 | 1.00th=[ 4817], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 7635], 00:12:42.886 | 30.00th=[ 8848], 40.00th=[10683], 50.00th=[12518], 60.00th=[14484], 00:12:42.886 | 70.00th=[22676], 80.00th=[27395], 90.00th=[33817], 95.00th=[39060], 00:12:42.886 | 99.00th=[48497], 99.50th=[51119], 99.90th=[55837], 99.95th=[56886], 00:12:42.886 | 99.99th=[56886] 00:12:42.886 bw ( KiB/s): min=15600, max=17200, per=25.09%, avg=16400.00, stdev=1131.37, samples=2 00:12:42.886 iops : min= 3900, max= 4300, avg=4100.00, stdev=282.84, samples=2 00:12:42.886 lat (msec) : 2=0.02%, 4=0.08%, 10=27.40%, 20=51.21%, 50=20.80% 00:12:42.886 lat (msec) : 100=0.48% 00:12:42.886 cpu : usr=3.69%, sys=4.69%, ctx=352, majf=0, minf=1 00:12:42.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:42.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:42.886 issued rwts: total=4096,4179,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:42.886 job1: (groupid=0, jobs=1): err= 0: pid=4177770: Thu Jul 25 07:18:49 2024 00:12:42.886 read: IOPS=7462, BW=29.1MiB/s 
(30.6MB/s)(29.2MiB/1002msec) 00:12:42.886 slat (nsec): min=852, max=43522k, avg=61071.47, stdev=592610.16 00:12:42.886 clat (usec): min=1399, max=51844, avg=7919.90, stdev=5903.75 00:12:42.886 lat (usec): min=1968, max=51858, avg=7980.97, stdev=5934.10 00:12:42.886 clat percentiles (usec): 00:12:42.886 | 1.00th=[ 3130], 5.00th=[ 4293], 10.00th=[ 4883], 20.00th=[ 5538], 00:12:42.886 | 30.00th=[ 5932], 40.00th=[ 6652], 50.00th=[ 7111], 60.00th=[ 7635], 00:12:42.886 | 70.00th=[ 8094], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[11863], 00:12:42.886 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50070], 99.95th=[51119], 00:12:42.886 | 99.99th=[51643] 00:12:42.886 write: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec); 0 zone resets 00:12:42.886 slat (nsec): min=1431, max=9816.4k, avg=62709.53, stdev=323095.82 00:12:42.886 clat (usec): min=1060, max=40218, avg=8846.13, stdev=5694.37 00:12:42.886 lat (usec): min=1071, max=40227, avg=8908.84, stdev=5719.97 00:12:42.886 clat percentiles (usec): 00:12:42.886 | 1.00th=[ 2008], 5.00th=[ 3523], 10.00th=[ 3949], 20.00th=[ 5407], 00:12:42.886 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 7373], 60.00th=[ 8029], 00:12:42.886 | 70.00th=[ 8717], 80.00th=[10159], 90.00th=[16450], 95.00th=[21890], 00:12:42.886 | 99.00th=[30540], 99.50th=[36963], 99.90th=[39584], 99.95th=[40109], 00:12:42.886 | 99.99th=[40109] 00:12:42.886 bw ( KiB/s): min=30272, max=31168, per=46.99%, avg=30720.00, stdev=633.57, samples=2 00:12:42.886 iops : min= 7568, max= 7792, avg=7680.00, stdev=158.39, samples=2 00:12:42.886 lat (msec) : 2=0.57%, 4=6.12%, 10=78.71%, 20=10.48%, 50=3.93% 00:12:42.886 lat (msec) : 100=0.19% 00:12:42.886 cpu : usr=4.40%, sys=5.49%, ctx=876, majf=0, minf=1 00:12:42.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:42.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:42.886 issued rwts: total=7477,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:42.886 job2: (groupid=0, jobs=1): err= 0: pid=4177784: Thu Jul 25 07:18:49 2024 00:12:42.886 read: IOPS=1845, BW=7384KiB/s (7561kB/s)(7428KiB/1006msec) 00:12:42.886 slat (nsec): min=902, max=23821k, avg=322459.19, stdev=1926167.33 00:12:42.886 clat (usec): min=2898, max=75903, avg=40911.60, stdev=16810.25 00:12:42.886 lat (usec): min=14104, max=75909, avg=41234.06, stdev=16810.30 00:12:42.886 clat percentiles (usec): 00:12:42.886 | 1.00th=[16450], 5.00th=[21890], 10.00th=[22676], 20.00th=[25035], 00:12:42.886 | 30.00th=[26346], 40.00th=[28705], 50.00th=[38011], 60.00th=[44827], 00:12:42.886 | 70.00th=[53216], 80.00th=[59507], 90.00th=[66323], 95.00th=[68682], 00:12:42.886 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:12:42.886 | 99.99th=[76022] 00:12:42.886 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:12:42.886 slat (nsec): min=1532, max=20789k, avg=191793.82, stdev=1048459.20 00:12:42.886 clat (usec): min=4766, max=49598, avg=24824.94, stdev=10375.72 00:12:42.886 lat (usec): min=4775, max=49603, avg=25016.74, stdev=10410.75 00:12:42.886 clat percentiles (usec): 00:12:42.886 | 1.00th=[ 6980], 5.00th=[ 9110], 10.00th=[11600], 20.00th=[17171], 00:12:42.886 | 30.00th=[19530], 40.00th=[20317], 50.00th=[21103], 60.00th=[24773], 00:12:42.886 | 70.00th=[30278], 80.00th=[35390], 90.00th=[40633], 95.00th=[43779], 00:12:42.886 | 99.00th=[46924], 
99.50th=[46924], 99.90th=[49546], 99.95th=[49546], 00:12:42.886 | 99.99th=[49546] 00:12:42.886 bw ( KiB/s): min= 8192, max= 8192, per=12.53%, avg=8192.00, stdev= 0.00, samples=2 00:12:42.886 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:12:42.886 lat (msec) : 4=0.03%, 10=4.25%, 20=14.19%, 50=65.51%, 100=16.03% 00:12:42.886 cpu : usr=1.59%, sys=1.69%, ctx=210, majf=0, minf=1 00:12:42.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:12:42.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:42.886 issued rwts: total=1857,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:42.886 job3: (groupid=0, jobs=1): err= 0: pid=4177791: Thu Jul 25 07:18:49 2024 00:12:42.886 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:12:42.886 slat (nsec): min=921, max=14241k, avg=192638.92, stdev=1188360.93 00:12:42.887 clat (usec): min=11466, max=54509, avg=24850.47, stdev=8878.58 00:12:42.887 lat (usec): min=11471, max=63158, avg=25043.11, stdev=8997.29 00:12:42.887 clat percentiles (usec): 00:12:42.887 | 1.00th=[13698], 5.00th=[15795], 10.00th=[16319], 20.00th=[16712], 00:12:42.887 | 30.00th=[18744], 40.00th=[20579], 50.00th=[21365], 60.00th=[24249], 00:12:42.887 | 70.00th=[26870], 80.00th=[32375], 90.00th=[39060], 95.00th=[43254], 00:12:42.887 | 99.00th=[46924], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:12:42.887 | 99.99th=[54264] 00:12:42.887 write: IOPS=2533, BW=9.90MiB/s (10.4MB/s)(9.96MiB/1007msec); 0 zone resets 00:12:42.887 slat (nsec): min=1646, max=10427k, avg=232290.27, stdev=941894.22 00:12:42.887 clat (usec): min=4075, max=76451, avg=29848.55, stdev=15988.01 00:12:42.887 lat (usec): min=6583, max=76460, avg=30080.84, stdev=16066.15 00:12:42.887 clat percentiles (usec): 00:12:42.887 | 1.00th=[ 9241], 5.00th=[11338], 10.00th=[13566], 20.00th=[16909], 00:12:42.887 | 30.00th=[20579], 40.00th=[22676], 50.00th=[24249], 60.00th=[27657], 00:12:42.887 | 70.00th=[32900], 80.00th=[44303], 90.00th=[56361], 95.00th=[64750], 00:12:42.887 | 99.00th=[71828], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:12:42.887 | 99.99th=[76022] 00:12:42.887 bw ( KiB/s): min= 8576, max=10808, per=14.83%, avg=9692.00, stdev=1578.26, samples=2 00:12:42.887 iops : min= 2144, max= 2702, avg=2423.00, stdev=394.57, samples=2 00:12:42.887 lat (msec) : 10=1.39%, 20=29.33%, 50=60.97%, 100=8.31% 00:12:42.887 cpu : usr=1.69%, sys=2.98%, ctx=327, majf=0, minf=1 00:12:42.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:12:42.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:42.887 issued rwts: total=2048,2551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:42.887 00:12:42.887 Run status group 0 (all jobs): 00:12:42.887 READ: bw=60.0MiB/s (63.0MB/s), 7384KiB/s-29.1MiB/s (7561kB/s-30.6MB/s), io=60.5MiB (63.4MB), run=1002-1007msec 00:12:42.887 WRITE: bw=63.8MiB/s (66.9MB/s), 8143KiB/s-29.9MiB/s (8339kB/s-31.4MB/s), io=64.3MiB (67.4MB), run=1002-1007msec 00:12:42.887 00:12:42.887 Disk stats (read/write): 00:12:42.887 nvme0n1: ios=3126/3584, merge=0/0, ticks=43727/59066, in_queue=102793, util=99.40% 00:12:42.887 nvme0n2: ios=6193/6429, merge=0/0, ticks=29206/40406, 
in_queue=69612, util=90.52% 00:12:42.887 nvme0n3: ios=1589/1728, merge=0/0, ticks=16550/10468, in_queue=27018, util=92.10% 00:12:42.887 nvme0n4: ios=1955/2048, merge=0/0, ticks=22427/29069, in_queue=51496, util=98.72% 00:12:42.887 07:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:42.887 07:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4178079 00:12:42.887 07:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:42.887 07:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:42.887 [global] 00:12:42.887 thread=1 00:12:42.887 invalidate=1 00:12:42.887 rw=read 00:12:42.887 time_based=1 00:12:42.887 runtime=10 00:12:42.887 ioengine=libaio 00:12:42.887 direct=1 00:12:42.887 bs=4096 00:12:42.887 iodepth=1 00:12:42.887 norandommap=1 00:12:42.887 numjobs=1 00:12:42.887 00:12:42.887 [job0] 00:12:42.887 filename=/dev/nvme0n1 00:12:42.887 [job1] 00:12:42.887 filename=/dev/nvme0n2 00:12:42.887 [job2] 00:12:42.887 filename=/dev/nvme0n3 00:12:42.887 [job3] 00:12:42.887 filename=/dev/nvme0n4 00:12:42.887 Could not set queue depth (nvme0n1) 00:12:42.887 Could not set queue depth (nvme0n2) 00:12:42.887 Could not set queue depth (nvme0n3) 00:12:42.887 Could not set queue depth (nvme0n4) 00:12:43.154 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:43.154 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:43.154 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:43.154 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:43.154 fio-3.35 00:12:43.154 Starting 4 threads 00:12:45.749 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:46.010 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2203648, buflen=4096 00:12:46.010 fio: pid=4178334, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:46.010 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:46.010 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=7200768, buflen=4096 00:12:46.010 fio: pid=4178320, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:46.010 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:46.010 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:46.269 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=6246400, buflen=4096 00:12:46.269 fio: pid=4178291, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:46.269 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:46.269 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:12:46.529 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=9555968, buflen=4096 00:12:46.529 fio: pid=4178296, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:46.529 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:46.529 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:46.529 00:12:46.529 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4178291: Thu Jul 25 07:18:53 2024 00:12:46.529 read: IOPS=533, BW=2132KiB/s (2183kB/s)(6100KiB/2861msec) 00:12:46.529 slat (usec): min=3, max=25411, avg=66.71, stdev=882.71 00:12:46.529 clat (usec): min=866, max=42089, avg=1785.08, stdev=3866.41 00:12:46.529 lat (usec): min=890, max=42093, avg=1851.82, stdev=3960.54 00:12:46.529 clat percentiles (usec): 00:12:46.529 | 1.00th=[ 1123], 5.00th=[ 1237], 10.00th=[ 1287], 20.00th=[ 1352], 00:12:46.529 | 30.00th=[ 1385], 40.00th=[ 1418], 50.00th=[ 1434], 60.00th=[ 1450], 00:12:46.529 | 70.00th=[ 1467], 80.00th=[ 1483], 90.00th=[ 1500], 95.00th=[ 1532], 00:12:46.529 | 99.00th=[ 1680], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:46.529 | 99.99th=[42206] 00:12:46.529 bw ( KiB/s): min= 128, max= 2752, per=26.34%, avg=2137.60, stdev=1129.10, samples=5 00:12:46.529 iops : min= 32, max= 688, avg=534.40, stdev=282.27, samples=5 00:12:46.529 lat (usec) : 1000=0.13% 00:12:46.529 lat (msec) : 2=98.89%, 50=0.92% 00:12:46.529 cpu : usr=0.45%, sys=1.61%, ctx=1532, majf=0, minf=1 00:12:46.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:46.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.529 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.529 issued rwts: total=1526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:46.529 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4178296: Thu Jul 25 07:18:53 2024 00:12:46.529 read: IOPS=769, BW=3076KiB/s (3150kB/s)(9332KiB/3034msec) 00:12:46.529 slat (usec): min=3, max=29573, avg=59.37, stdev=913.20 00:12:46.529 clat (usec): min=669, max=3602, avg=1225.51, stdev=129.19 00:12:46.529 lat (usec): min=674, max=30753, avg=1284.90, stdev=923.01 00:12:46.529 clat percentiles (usec): 00:12:46.529 | 1.00th=[ 930], 5.00th=[ 1029], 10.00th=[ 1057], 20.00th=[ 1106], 00:12:46.529 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1254], 60.00th=[ 1270], 00:12:46.529 | 70.00th=[ 1303], 80.00th=[ 1319], 90.00th=[ 1352], 95.00th=[ 1385], 00:12:46.529 | 99.00th=[ 1434], 99.50th=[ 1467], 99.90th=[ 1876], 99.95th=[ 2638], 00:12:46.529 | 99.99th=[ 3589] 00:12:46.529 bw ( KiB/s): min= 2684, max= 3400, per=38.11%, avg=3092.67, stdev=240.80, samples=6 00:12:46.529 iops : min= 671, max= 850, avg=773.17, stdev=60.20, samples=6 00:12:46.529 lat (usec) : 750=0.09%, 1000=2.83% 00:12:46.529 lat (msec) : 2=96.96%, 4=0.09% 00:12:46.529 cpu : usr=0.56%, sys=1.48%, ctx=2347, majf=0, minf=1 00:12:46.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:46.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.529 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:12:46.529 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:46.529 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4178320: Thu Jul 25 07:18:53 2024 00:12:46.529 read: IOPS=656, BW=2623KiB/s (2686kB/s)(7032KiB/2681msec) 00:12:46.529 slat (usec): min=8, max=20308, avg=46.40, stdev=596.27 00:12:46.529 clat (usec): min=895, max=41999, avg=1460.24, stdev=1674.53 00:12:46.529 lat (usec): min=921, max=42024, avg=1506.66, stdev=1775.84 00:12:46.529 clat percentiles (usec): 00:12:46.529 | 1.00th=[ 1106], 5.00th=[ 1237], 10.00th=[ 1287], 20.00th=[ 1336], 00:12:46.529 | 30.00th=[ 1369], 40.00th=[ 1385], 50.00th=[ 1401], 60.00th=[ 1418], 00:12:46.529 | 70.00th=[ 1434], 80.00th=[ 1450], 90.00th=[ 1483], 95.00th=[ 1500], 00:12:46.529 | 99.00th=[ 1549], 99.50th=[ 1565], 99.90th=[41681], 99.95th=[42206], 00:12:46.529 | 99.99th=[42206] 00:12:46.529 bw ( KiB/s): min= 2120, max= 2840, per=32.75%, avg=2657.60, stdev=301.80, samples=5 00:12:46.529 iops : min= 530, max= 710, avg=664.40, stdev=75.45, samples=5 00:12:46.529 lat (usec) : 1000=0.17% 00:12:46.529 lat (msec) : 2=99.60%, 50=0.17% 00:12:46.529 cpu : usr=0.82%, sys=2.95%, ctx=1763, majf=0, minf=1 00:12:46.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:46.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.529 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.529 issued rwts: total=1759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:46.530 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4178334: Thu Jul 25 07:18:53 2024 00:12:46.530 read: IOPS=214, BW=856KiB/s (877kB/s)(2152KiB/2513msec) 00:12:46.530 slat (nsec): min=23691, max=43006, avg=24702.36, stdev=2063.71 00:12:46.530 clat (usec): min=747, max=42153, avg=4592.13, stdev=11574.18 00:12:46.530 lat (usec): min=771, max=42189, avg=4616.83, stdev=11574.73 00:12:46.530 clat percentiles (usec): 00:12:46.530 | 1.00th=[ 791], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 963], 00:12:46.530 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:12:46.530 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1205], 95.00th=[42206], 00:12:46.530 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:46.530 | 99.99th=[42206] 00:12:46.530 bw ( KiB/s): min= 96, max= 3712, per=10.60%, avg=860.80, stdev=1596.41, samples=5 00:12:46.530 iops : min= 24, max= 928, avg=215.20, stdev=399.10, samples=5 00:12:46.530 lat (usec) : 750=0.19%, 1000=32.10% 00:12:46.530 lat (msec) : 2=58.81%, 50=8.72% 00:12:46.530 cpu : usr=0.08%, sys=0.80%, ctx=540, majf=0, minf=2 00:12:46.530 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:46.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.530 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.530 issued rwts: total=539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:46.530 00:12:46.530 Run status group 0 (all jobs): 00:12:46.530 READ: bw=8113KiB/s (8308kB/s), 856KiB/s-3076KiB/s (877kB/s-3150kB/s), io=24.0MiB (25.2MB), run=2513-3034msec 00:12:46.530 00:12:46.530 Disk stats (read/write): 00:12:46.530 nvme0n1: ios=1483/0, merge=0/0, ticks=2599/0, 
in_queue=2599, util=90.55% 00:12:46.530 nvme0n2: ios=2334/0, merge=0/0, ticks=2824/0, in_queue=2824, util=91.00% 00:12:46.530 nvme0n3: ios=1705/0, merge=0/0, ticks=2386/0, in_queue=2386, util=96.18% 00:12:46.530 nvme0n4: ios=332/0, merge=0/0, ticks=2257/0, in_queue=2257, util=95.95% 00:12:46.530 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:46.530 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:46.790 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:46.790 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:47.050 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:47.050 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:47.050 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:47.050 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 4178079 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:47.311 nvmf hotplug test: fio failed as expected 00:12:47.311 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
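The xtrace above is the hotplug portion of target/fio.sh: a 10-second read job with iodepth=1 is launched in the background, the RAID and malloc bdevs backing the four namespaces are deleted through rpc.py while I/O is still in flight, and the resulting err=121 (Remote I/O error) entries plus the non-zero fio status (4) are exactly what the final "fio failed as expected" message asserts. The snippet below is a condensed, standalone sketch of that flow, assuming the same workspace layout and an already-connected nqn.2016-06.io.spdk:cnode1; it is not the literal body of fio.sh, and the success/failure check is simplified.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$rootdir/scripts/rpc.py"

# Background read workload against the connected namespaces (same flags as in the trace).
"$rootdir/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Pull the backing bdevs out from under the running I/O.
"$rpc" bdev_raid_delete concat0
"$rpc" bdev_raid_delete raid0
for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$rpc" bdev_malloc_delete "$b"
done

# fio must now fail with Remote I/O errors; a clean exit would mean the hotplug path is broken.
if wait "$fio_pid"; then
    echo "nvmf hotplug test: fio unexpectedly succeeded" >&2
    exit 1
fi
echo "nvmf hotplug test: fio failed as expected"
nvme disconnect -n nqn.2016-06.io.spdk:cnode1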
00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.573 rmmod nvme_tcp 00:12:47.573 rmmod nvme_fabrics 00:12:47.573 rmmod nvme_keyring 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4174570 ']' 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4174570 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 4174570 ']' 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 4174570 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4174570 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4174570' 00:12:47.573 killing process with pid 4174570 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 4174570 00:12:47.573 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 4174570 00:12:47.833 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:47.833 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:47.834 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:47.834 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.834 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:47.834 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.834 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.834 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.749 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:50.011 00:12:50.011 real 0m28.425s 00:12:50.011 user 2m36.985s 00:12:50.011 sys 0m9.108s 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.011 ************************************ 00:12:50.011 END TEST nvmf_fio_target 00:12:50.011 ************************************ 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:50.011 ************************************ 00:12:50.011 START TEST nvmf_bdevio 00:12:50.011 ************************************ 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:50.011 * Looking for test storage... 
00:12:50.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:50.011 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:12:50.012 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:58.158 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:58.159 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:58.159 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:58.159 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:58.159 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.159 07:19:04 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:58.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:58.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.762 ms 00:12:58.159 00:12:58.159 --- 10.0.0.2 ping statistics --- 00:12:58.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.159 rtt min/avg/max/mdev = 0.762/0.762/0.762/0.000 ms 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:58.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:58.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.401 ms 00:12:58.159 00:12:58.159 --- 10.0.0.1 ping statistics --- 00:12:58.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.159 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:58.159 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4183382 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4183382 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 4183382 ']' 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.160 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:58.160 [2024-07-25 07:19:04.651304] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:12:58.160 [2024-07-25 07:19:04.651372] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.160 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.160 [2024-07-25 07:19:04.738989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.160 [2024-07-25 07:19:04.835108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.160 [2024-07-25 07:19:04.835168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.160 [2024-07-25 07:19:04.835176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.160 [2024-07-25 07:19:04.835183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.160 [2024-07-25 07:19:04.835189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.160 [2024-07-25 07:19:04.835310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:12:58.160 [2024-07-25 07:19:04.835480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:12:58.160 [2024-07-25 07:19:04.835640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.160 [2024-07-25 07:19:04.835640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:58.160 [2024-07-25 07:19:05.507944] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.160 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:58.421 Malloc0 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:58.421 [2024-07-25 07:19:05.573516] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:58.421 { 00:12:58.421 "params": { 00:12:58.421 "name": "Nvme$subsystem", 00:12:58.421 "trtype": "$TEST_TRANSPORT", 00:12:58.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:58.421 "adrfam": "ipv4", 00:12:58.421 "trsvcid": "$NVMF_PORT", 00:12:58.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:58.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:58.421 "hdgst": ${hdgst:-false}, 00:12:58.421 "ddgst": ${ddgst:-false} 00:12:58.421 }, 00:12:58.421 "method": "bdev_nvme_attach_controller" 00:12:58.421 } 00:12:58.421 EOF 00:12:58.421 )") 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
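The rpc_cmd calls traced above are the entire target-side setup for this bdevio run: create the TCP transport, back it with a 64 MiB / 512-byte-block malloc bdev, expose that bdev as a namespace of nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.2:4420. rpc_cmd is the autotest wrapper around SPDK's rpc.py client, so a rough standalone equivalent is the sketch below (default /var/tmp/spdk.sock RPC socket assumed; flags copied verbatim from the trace):

    rpc=./scripts/rpc.py                                    # run from the SPDK source tree
    $rpc nvmf_create_transport -t tcp -o -u 8192            # transport options as in the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Even though nvmf_tgt itself runs under 'ip netns exec cvl_0_0_ns_spdk', these RPCs need no namespace handling: the RPC channel is a UNIX-domain socket, and network namespaces do not isolate the filesystem path it lives on.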
00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:58.421 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:58.421 "params": { 00:12:58.421 "name": "Nvme1", 00:12:58.421 "trtype": "tcp", 00:12:58.422 "traddr": "10.0.0.2", 00:12:58.422 "adrfam": "ipv4", 00:12:58.422 "trsvcid": "4420", 00:12:58.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:58.422 "hdgst": false, 00:12:58.422 "ddgst": false 00:12:58.422 }, 00:12:58.422 "method": "bdev_nvme_attach_controller" 00:12:58.422 }' 00:12:58.422 [2024-07-25 07:19:05.631540] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:12:58.422 [2024-07-25 07:19:05.631611] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4183647 ] 00:12:58.422 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.422 [2024-07-25 07:19:05.697960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:58.422 [2024-07-25 07:19:05.773034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.422 [2024-07-25 07:19:05.773157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.422 [2024-07-25 07:19:05.773160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.682 I/O targets: 00:12:58.682 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:58.682 00:12:58.682 00:12:58.682 CUnit - A unit testing framework for C - Version 2.1-3 00:12:58.682 http://cunit.sourceforge.net/ 00:12:58.682 00:12:58.682 00:12:58.682 Suite: bdevio tests on: Nvme1n1 00:12:58.682 Test: blockdev write read block ...passed 00:12:58.682 Test: blockdev write zeroes read block ...passed 00:12:58.682 Test: blockdev write zeroes read no split ...passed 00:12:58.682 Test: blockdev write zeroes read split ...passed 00:12:58.943 Test: blockdev write zeroes read split partial ...passed 00:12:58.943 Test: blockdev reset ...[2024-07-25 07:19:06.106706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:58.943 [2024-07-25 07:19:06.106775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1490d90 (9): Bad file descriptor 00:12:58.944 [2024-07-25 07:19:06.163738] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
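The JSON rendered by gen_nvmf_target_json earlier in this block is the only configuration the bdevio binary receives; the '--json /dev/fd/62' argument means it is handed over a file descriptor rather than a file on disk. Written out as an ordinary config file it would look roughly like the sketch below; the outer "subsystems"/"config" wrapper is SPDK's usual JSON config shape and is assumed here, since the trace only prints the inner fragment, and the file path is made up:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    ./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json    # hypothetical path for the file above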
00:12:58.944 passed 00:12:58.944 Test: blockdev write read 8 blocks ...passed 00:12:58.944 Test: blockdev write read size > 128k ...passed 00:12:58.944 Test: blockdev write read invalid size ...passed 00:12:58.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:58.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:58.944 Test: blockdev write read max offset ...passed 00:12:59.203 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:59.203 Test: blockdev writev readv 8 blocks ...passed 00:12:59.203 Test: blockdev writev readv 30 x 1block ...passed 00:12:59.203 Test: blockdev writev readv block ...passed 00:12:59.203 Test: blockdev writev readv size > 128k ...passed 00:12:59.203 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:59.203 Test: blockdev comparev and writev ...[2024-07-25 07:19:06.438182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:59.203 [2024-07-25 07:19:06.438211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:59.203 [2024-07-25 07:19:06.438222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:59.203 [2024-07-25 07:19:06.438228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:59.203 [2024-07-25 07:19:06.438861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:59.203 [2024-07-25 07:19:06.438870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:59.203 [2024-07-25 07:19:06.438880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:59.203 [2024-07-25 07:19:06.438885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:59.203 [2024-07-25 07:19:06.439505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:59.203 [2024-07-25 07:19:06.439514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:59.203 [2024-07-25 07:19:06.439523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:59.203 [2024-07-25 07:19:06.439532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:59.203 [2024-07-25 07:19:06.440124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:59.203 [2024-07-25 07:19:06.440132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:59.203 [2024-07-25 07:19:06.440141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:59.203 [2024-07-25 07:19:06.440146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:59.203 passed 00:12:59.203 Test: blockdev nvme passthru rw ...passed 00:12:59.203 Test: blockdev nvme passthru vendor specific ...[2024-07-25 07:19:06.525250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:59.203 [2024-07-25 07:19:06.525262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:59.203 [2024-07-25 07:19:06.525723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:59.203 [2024-07-25 07:19:06.525730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:59.203 [2024-07-25 07:19:06.526175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:59.203 [2024-07-25 07:19:06.526182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:59.203 [2024-07-25 07:19:06.526672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:59.203 [2024-07-25 07:19:06.526680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:59.203 passed 00:12:59.203 Test: blockdev nvme admin passthru ...passed 00:12:59.464 Test: blockdev copy ...passed 00:12:59.464 00:12:59.464 Run Summary: Type Total Ran Passed Failed Inactive 00:12:59.464 suites 1 1 n/a 0 0 00:12:59.464 tests 23 23 23 0 0 00:12:59.464 asserts 152 152 152 0 n/a 00:12:59.464 00:12:59.464 Elapsed time = 1.337 seconds 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.464 rmmod nvme_tcp 00:12:59.464 rmmod nvme_fabrics 00:12:59.464 rmmod nvme_keyring 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
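The teardown that starts here mirrors the setup: drop the subsystem over RPC, unload the kernel initiator modules (the rmmod lines above), stop the nvmf_tgt process, and dismantle the test namespace. A condensed standalone sketch of those steps (pid and interface names are the ones from this log; the 'ip netns delete' line is an assumption about what the remove_spdk_ns helper amounts to for this run):

    rpc=./scripts/rpc.py
    pid=4183382                                            # nvmfpid reported earlier in this block
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # same three modules removed above
    kill "$pid"
    while kill -0 "$pid" 2>/dev/null; do sleep 0.5; done   # poll until the target exits
    ip netns delete cvl_0_0_ns_spdk                        # assumption, see lead-in
    ip -4 addr flush cvl_0_1                               # as in the trace just below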
00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4183382 ']' 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4183382 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 4183382 ']' 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 4183382 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.464 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4183382 00:12:59.725 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:59.725 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:59.725 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4183382' 00:12:59.725 killing process with pid 4183382 00:12:59.725 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 4183382 00:12:59.725 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 4183382 00:12:59.725 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:59.725 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:59.725 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:59.725 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.725 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:59.725 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.725 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.725 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:02.275 00:13:02.275 real 0m11.896s 00:13:02.275 user 0m12.796s 00:13:02.275 sys 0m6.017s 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:02.275 ************************************ 00:13:02.275 END TEST nvmf_bdevio 00:13:02.275 ************************************ 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:02.275 00:13:02.275 real 4m54.627s 00:13:02.275 user 11m41.284s 00:13:02.275 sys 1m43.854s 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:02.275 ************************************ 00:13:02.275 END TEST nvmf_target_core 00:13:02.275 ************************************ 00:13:02.275 07:19:09 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:02.275 07:19:09 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:02.275 07:19:09 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.275 07:19:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.275 ************************************ 00:13:02.275 START TEST nvmf_target_extra 00:13:02.275 ************************************ 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:02.275 * Looking for test storage... 00:13:02.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.275 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
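For the nvmf_example run that starts here, nvmf/common.sh again defines the initiator identity: NVME_HOSTNQN comes from 'nvme gen-hostnqn', NVME_HOSTID is the UUID portion of that NQN, and the pair is what gets passed as --hostnqn/--hostid wherever a test issues 'nvme connect'. This particular test drives the target with spdk_nvme_perf further down in the trace rather than the kernel initiator, so the following is a sketch only (nvme-cli flag shapes assumed; address, port and subsystem NQN taken from this log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # derivation assumed; the trace shows hostid == the uuid suffix
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"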
00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:02.276 ************************************ 00:13:02.276 START TEST nvmf_example 00:13:02.276 ************************************ 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:02.276 * Looking for test storage... 00:13:02.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.276 07:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:02.276 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:13:02.277 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:10.421 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:10.421 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:10.421 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.421 07:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:10.421 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.421 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:10.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:10.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:13:10.422 00:13:10.422 --- 10.0.0.2 ping statistics --- 00:13:10.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.422 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:10.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.402 ms 00:13:10.422 00:13:10.422 --- 10.0.0.1 ping statistics --- 00:13:10.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.422 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4188044 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4188044 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 4188044 ']' 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:10.422 07:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:10.422 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:10.422 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.422 07:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:10.422 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:10.422 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.691 Initializing NVMe Controllers 00:13:22.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:22.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:22.691 Initialization complete. Launching workers. 00:13:22.691 ======================================================== 00:13:22.691 Latency(us) 00:13:22.691 Device Information : IOPS MiB/s Average min max 00:13:22.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14873.00 58.10 4303.44 892.60 15873.43 00:13:22.691 ======================================================== 00:13:22.691 Total : 14873.00 58.10 4303.44 892.60 15873.43 00:13:22.691 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.691 rmmod nvme_tcp 00:13:22.691 rmmod nvme_fabrics 00:13:22.691 rmmod nvme_keyring 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 4188044 ']' 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 4188044 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 4188044 ']' 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 4188044 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:13:22.691 07:19:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:22.691 07:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4188044 00:13:22.691 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:13:22.691 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:13:22.691 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4188044' 00:13:22.691 killing process with pid 4188044 00:13:22.691 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 4188044 00:13:22.691 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 4188044 00:13:22.691 nvmf threads initialize successfully 00:13:22.691 bdev subsystem init successfully 00:13:22.691 created a nvmf target service 00:13:22.691 create targets's poll groups done 00:13:22.691 all subsystems of target started 00:13:22.691 nvmf target is running 00:13:22.691 all subsystems of target stopped 00:13:22.691 destroy targets's poll groups done 00:13:22.691 destroyed the nvmf target service 00:13:22.691 bdev subsystem finish successfully 00:13:22.691 nvmf threads destroy successfully 00:13:22.691 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.691 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:22.691 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:22.691 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.691 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.692 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.692 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.692 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.953 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:22.953 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:22.953 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:22.953 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:22.953 00:13:22.953 real 0m20.894s 00:13:22.953 user 0m46.611s 00:13:22.953 sys 0m6.326s 00:13:22.953 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.953 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:22.953 ************************************ 00:13:22.953 END TEST nvmf_example 00:13:22.953 ************************************ 00:13:22.953 07:19:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:22.953 07:19:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:22.953 07:19:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.953 07:19:30 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:23.217 ************************************ 00:13:23.217 START TEST nvmf_filesystem 00:13:23.217 ************************************ 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:23.217 * Looking for test storage... 00:13:23.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:23.217 07:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:23.217 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:23.218 07:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:13:23.218 07:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:23.218 07:19:30 
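
The applications.sh trace above resolves the repository root from the script's own location (dirname plus readlink -f, then stepping up out of test/common) and defines the application path arrays such as NVMF_APP and SPDK_APP relative to build/bin. A minimal sketch of that idiom, under the assumption that it runs from a file two levels below the repo root:

    # Hedged sketch of the path-derivation idiom seen in the applications.sh trace.
    _this_dir=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # e.g. <repo>/test/common
    _root=$(readlink -f "$_this_dir/../..")                     # e.g. <repo>
    _app_dir=$_root/build/bin
    _examples_dir=$_root/build/examples

    # Arrays so callers can append flags, e.g. "${NVMF_APP[@]}" -m 0xF
    NVMF_APP=("$_app_dir/nvmf_tgt")
    SPDK_APP=("$_app_dir/spdk_tgt")
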
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:23.218 #define SPDK_CONFIG_H 00:13:23.218 #define SPDK_CONFIG_APPS 1 00:13:23.218 #define SPDK_CONFIG_ARCH native 00:13:23.218 #undef SPDK_CONFIG_ASAN 00:13:23.218 #undef SPDK_CONFIG_AVAHI 00:13:23.218 #undef SPDK_CONFIG_CET 00:13:23.218 #define SPDK_CONFIG_COVERAGE 1 00:13:23.218 #define SPDK_CONFIG_CROSS_PREFIX 00:13:23.218 #undef SPDK_CONFIG_CRYPTO 00:13:23.218 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:23.218 #undef SPDK_CONFIG_CUSTOMOCF 00:13:23.218 #undef SPDK_CONFIG_DAOS 00:13:23.218 #define SPDK_CONFIG_DAOS_DIR 00:13:23.218 #define SPDK_CONFIG_DEBUG 1 00:13:23.218 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:23.218 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:23.218 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:23.218 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:23.218 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:23.218 #undef SPDK_CONFIG_DPDK_UADK 00:13:23.218 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:23.218 #define SPDK_CONFIG_EXAMPLES 1 00:13:23.218 #undef SPDK_CONFIG_FC 00:13:23.218 #define SPDK_CONFIG_FC_PATH 00:13:23.218 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:23.218 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:23.218 #undef SPDK_CONFIG_FUSE 00:13:23.218 #undef SPDK_CONFIG_FUZZER 00:13:23.218 #define SPDK_CONFIG_FUZZER_LIB 00:13:23.218 #undef SPDK_CONFIG_GOLANG 00:13:23.218 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:23.218 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:23.218 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:23.218 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:23.218 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:23.218 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:23.218 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:23.218 #define SPDK_CONFIG_IDXD 1 00:13:23.218 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:23.218 #undef SPDK_CONFIG_IPSEC_MB 00:13:23.218 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:23.218 #define SPDK_CONFIG_ISAL 1 00:13:23.218 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:23.218 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:23.218 #define SPDK_CONFIG_LIBDIR 00:13:23.218 #undef SPDK_CONFIG_LTO 00:13:23.218 #define SPDK_CONFIG_MAX_LCORES 128 00:13:23.218 #define SPDK_CONFIG_NVME_CUSE 1 00:13:23.218 #undef SPDK_CONFIG_OCF 00:13:23.218 #define SPDK_CONFIG_OCF_PATH 00:13:23.218 #define SPDK_CONFIG_OPENSSL_PATH 00:13:23.218 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:23.218 #define SPDK_CONFIG_PGO_DIR 00:13:23.218 #undef SPDK_CONFIG_PGO_USE 00:13:23.218 #define SPDK_CONFIG_PREFIX /usr/local 00:13:23.218 #undef SPDK_CONFIG_RAID5F 00:13:23.218 #undef SPDK_CONFIG_RBD 00:13:23.218 #define SPDK_CONFIG_RDMA 1 00:13:23.218 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:23.218 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:23.218 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:23.218 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:23.218 #define SPDK_CONFIG_SHARED 1 00:13:23.218 #undef SPDK_CONFIG_SMA 00:13:23.218 #define SPDK_CONFIG_TESTS 1 00:13:23.218 #undef SPDK_CONFIG_TSAN 00:13:23.218 #define SPDK_CONFIG_UBLK 1 00:13:23.218 #define SPDK_CONFIG_UBSAN 1 00:13:23.218 #undef SPDK_CONFIG_UNIT_TESTS 00:13:23.218 #undef SPDK_CONFIG_URING 00:13:23.218 #define SPDK_CONFIG_URING_PATH 00:13:23.218 #undef SPDK_CONFIG_URING_ZNS 00:13:23.218 #undef SPDK_CONFIG_USDT 00:13:23.218 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:23.218 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:23.218 #define SPDK_CONFIG_VFIO_USER 1 00:13:23.218 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:13:23.218 #define SPDK_CONFIG_VHOST 1 00:13:23.218 #define SPDK_CONFIG_VIRTIO 1 00:13:23.218 #undef SPDK_CONFIG_VTUNE 00:13:23.218 #define SPDK_CONFIG_VTUNE_DIR 00:13:23.218 #define SPDK_CONFIG_WERROR 1 00:13:23.218 #define SPDK_CONFIG_WPDK_DIR 00:13:23.218 #undef SPDK_CONFIG_XNVME 00:13:23.218 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:23.218 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:23.219 07:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:23.219 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:23.220 07:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:23.220 07:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:23.220 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
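
The exports above fix the runtime environment for the rest of the run: library and Python search paths, the default JSON-RPC socket at /var/tmp/spdk.sock, the QEMU binaries, and the sanitizer behaviour. The sanitizer settings can be reproduced for a local run with the values copied verbatim from the trace:

    # Sanitizer options exactly as exported in the trace above.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    # The leak-sanitizer suppression file is regenerated each run; leak:libfuse3.so is its one entry here.
    echo leak:libfuse3.so > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
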
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j144 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 4190833 ]] 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 4190833 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.RjiuLK 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.RjiuLK/tests/target /tmp/spdk.RjiuLK 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:13:23.221 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=954236928 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330192896 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=118601113600 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=129370976256 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10769862656 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64623304704 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685486080 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
fss["$mount"]=tmpfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=25850851328 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=25874198528 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23347200 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=efivarfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=efivarfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=216064 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=507904 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=287744 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64683851776 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685490176 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=1638400 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12937093120 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12937097216 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:13:23.484 * Looking for test storage... 
00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=118601113600 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=12984455168 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:23.484 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:23.485 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:31.627 
07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.627 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:31.628 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:31.628 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:31.628 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:31.628 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:31.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:13:31.628 00:13:31.628 --- 10.0.0.2 ping statistics --- 00:13:31.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.628 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:31.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:13:31.628 00:13:31.628 --- 10.0.0.1 ping statistics --- 00:13:31.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.628 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:31.628 ************************************ 00:13:31.628 START TEST nvmf_filesystem_no_in_capsule 00:13:31.628 ************************************ 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:31.628 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.629 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:31.629 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.629 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1125 00:13:31.629 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1125 00:13:31.629 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:31.629 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1125 ']' 00:13:31.629 07:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.629 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.629 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.629 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.629 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.629 [2024-07-25 07:19:38.035238] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:13:31.629 [2024-07-25 07:19:38.035302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.629 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.629 [2024-07-25 07:19:38.106266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.629 [2024-07-25 07:19:38.180567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.629 [2024-07-25 07:19:38.180604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.629 [2024-07-25 07:19:38.180612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.629 [2024-07-25 07:19:38.180622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.629 [2024-07-25 07:19:38.180628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
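The commands traced just above wire the two E810 ports into a point-to-point test link (cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side, cvl_0_1 left in the default namespace as the initiator side) and then start nvmf_tgt inside that namespace, waiting for its RPC socket. A simplified recap with the addresses and flags from this run; the polling loop is a stand-in for the harness's waitforlisten helper, and the address-flush and cleanup steps are omitted:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

# Target-side port into its own namespace, addresses on both ends, firewall hole for 4420.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # initiator -> target reachability check

# Start nvmf_tgt inside the namespace and poll its RPC socket until it answers.
ip netns exec "$NS" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
  "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done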
00:13:31.629 [2024-07-25 07:19:38.180780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.629 [2024-07-25 07:19:38.180905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.629 [2024-07-25 07:19:38.181061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.629 [2024-07-25 07:19:38.181062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.629 [2024-07-25 07:19:38.849099] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.629 Malloc1 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.629 07:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.629 [2024-07-25 07:19:38.979306] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.629 07:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:31.889 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.889 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:31.889 { 00:13:31.889 "name": "Malloc1", 00:13:31.889 "aliases": [ 00:13:31.889 "3777f440-a592-4627-a286-7e6e83c92a34" 00:13:31.889 ], 00:13:31.889 "product_name": "Malloc disk", 00:13:31.890 "block_size": 512, 00:13:31.890 "num_blocks": 1048576, 00:13:31.890 "uuid": "3777f440-a592-4627-a286-7e6e83c92a34", 00:13:31.890 "assigned_rate_limits": { 00:13:31.890 "rw_ios_per_sec": 0, 00:13:31.890 "rw_mbytes_per_sec": 0, 00:13:31.890 "r_mbytes_per_sec": 0, 00:13:31.890 "w_mbytes_per_sec": 0 00:13:31.890 }, 00:13:31.890 "claimed": true, 00:13:31.890 "claim_type": "exclusive_write", 00:13:31.890 "zoned": false, 00:13:31.890 "supported_io_types": { 00:13:31.890 "read": 
true, 00:13:31.890 "write": true, 00:13:31.890 "unmap": true, 00:13:31.890 "flush": true, 00:13:31.890 "reset": true, 00:13:31.890 "nvme_admin": false, 00:13:31.890 "nvme_io": false, 00:13:31.890 "nvme_io_md": false, 00:13:31.890 "write_zeroes": true, 00:13:31.890 "zcopy": true, 00:13:31.890 "get_zone_info": false, 00:13:31.890 "zone_management": false, 00:13:31.890 "zone_append": false, 00:13:31.890 "compare": false, 00:13:31.890 "compare_and_write": false, 00:13:31.890 "abort": true, 00:13:31.890 "seek_hole": false, 00:13:31.890 "seek_data": false, 00:13:31.890 "copy": true, 00:13:31.890 "nvme_iov_md": false 00:13:31.890 }, 00:13:31.890 "memory_domains": [ 00:13:31.890 { 00:13:31.890 "dma_device_id": "system", 00:13:31.890 "dma_device_type": 1 00:13:31.890 }, 00:13:31.890 { 00:13:31.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.890 "dma_device_type": 2 00:13:31.890 } 00:13:31.890 ], 00:13:31.890 "driver_specific": {} 00:13:31.890 } 00:13:31.890 ]' 00:13:31.890 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:31.890 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:31.890 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:31.890 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:31.890 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:31.890 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:31.890 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:31.890 07:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.806 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.806 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:33.806 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.806 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:33.806 07:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:35.719 07:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:35.719 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:36.291 07:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.231 ************************************ 00:13:37.231 START TEST filesystem_ext4 00:13:37.231 ************************************ 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
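The RPC calls and the nvme connect traced above provision the target end to end: TCP transport, a 512 MiB malloc bdev, the cnode1 subsystem with that bdev as a namespace, a listener on 10.0.0.2:4420, then the initiator-side connect and a single GPT partition on the exported device. Condensed into a plain script with the same arguments as this run (rpc_cmd here is a thin stand-in for the harness helper of the same name):

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

# Transport, backing bdev, subsystem, namespace, listener - same order as the trace.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc_cmd bdev_malloc_create 512 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Size check used by the test: 512-byte blocks * 1048576 blocks = 512 MiB.
bs=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')
nb=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')
echo "Malloc1: $((bs * nb)) bytes"

# Initiator side: connect over TCP, wait for the serial to show up, then lay down one GPT partition.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe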
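The per-filesystem check whose ext4 run starts below (btrfs and xfs follow) reduces to roughly the following, per the filesystem.sh trace: format the exported partition, mount it, do a small write/remove cycle, unmount, and confirm the block device is still visible. Retry handling around umount and the target-pid liveness check are left out of this sketch:

nvmf_filesystem_check() {
  local fstype=$1 dev=/dev/nvme0n1p1 force=-f
  [[ $fstype == ext4 ]] && force=-F               # mkfs.ext4 takes -F, the others -f

  "mkfs.$fstype" "$force" "$dev"
  mkdir -p /mnt/device
  mount "$dev" /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device

  lsblk -l -o NAME | grep -q -w nvme0n1p1         # the partition must survive the cycle
}

for fs in ext4 btrfs xfs; do nvmf_filesystem_check "$fs"; done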
00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:37.231 07:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:37.231 mke2fs 1.46.5 (30-Dec-2021) 00:13:37.493 Discarding device blocks: 0/522240 done 00:13:37.493 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:37.493 Filesystem UUID: 35be67db-6a89-483b-a74e-07150ac1136c 00:13:37.493 Superblock backups stored on blocks: 00:13:37.493 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:37.493 00:13:37.493 Allocating group tables: 0/64 done 00:13:37.493 Writing inode tables: 0/64 done 00:13:37.754 Creating journal (8192 blocks): done 00:13:38.325 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:13:38.325 00:13:38.325 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:38.325 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:38.585 
07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1125 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:38.585 00:13:38.585 real 0m1.344s 00:13:38.585 user 0m0.028s 00:13:38.585 sys 0m0.070s 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.585 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:38.585 ************************************ 00:13:38.585 END TEST filesystem_ext4 00:13:38.585 ************************************ 00:13:38.845 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:38.845 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:38.845 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.845 07:19:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.845 ************************************ 00:13:38.845 START TEST filesystem_btrfs 00:13:38.845 ************************************ 00:13:38.845 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:38.845 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:38.845 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:38.845 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:38.845 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:38.845 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:38.845 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:38.846 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:38.846 07:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:38.846 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:38.846 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:39.107 btrfs-progs v6.6.2 00:13:39.107 See https://btrfs.readthedocs.io for more information. 00:13:39.107 00:13:39.107 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:39.107 NOTE: several default settings have changed in version 5.15, please make sure 00:13:39.107 this does not affect your deployments: 00:13:39.107 - DUP for metadata (-m dup) 00:13:39.107 - enabled no-holes (-O no-holes) 00:13:39.107 - enabled free-space-tree (-R free-space-tree) 00:13:39.107 00:13:39.107 Label: (null) 00:13:39.107 UUID: aecabcf4-35aa-449b-a404-e871700d9a46 00:13:39.107 Node size: 16384 00:13:39.107 Sector size: 4096 00:13:39.107 Filesystem size: 510.00MiB 00:13:39.107 Block group profiles: 00:13:39.107 Data: single 8.00MiB 00:13:39.107 Metadata: DUP 32.00MiB 00:13:39.107 System: DUP 8.00MiB 00:13:39.107 SSD detected: yes 00:13:39.107 Zoned device: no 00:13:39.107 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:39.107 Runtime features: free-space-tree 00:13:39.107 Checksum: crc32c 00:13:39.107 Number of devices: 1 00:13:39.107 Devices: 00:13:39.107 ID SIZE PATH 00:13:39.107 1 510.00MiB /dev/nvme0n1p1 00:13:39.107 00:13:39.107 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:39.107 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1125 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:39.681 00:13:39.681 real 0m0.932s 00:13:39.681 user 0m0.035s 00:13:39.681 sys 0m0.126s 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:39.681 ************************************ 00:13:39.681 END TEST filesystem_btrfs 00:13:39.681 ************************************ 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.681 07:19:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:39.681 ************************************ 00:13:39.681 START TEST filesystem_xfs 00:13:39.681 ************************************ 00:13:39.681 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:39.681 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:39.681 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:39.681 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:39.682 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:39.682 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:39.682 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:39.682 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:39.682 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:39.682 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:39.682 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:39.942 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:39.942 = sectsz=512 attr=2, projid32bit=1 00:13:39.942 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:39.942 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:13:39.942 data = bsize=4096 blocks=130560, imaxpct=25 00:13:39.942 = sunit=0 swidth=0 blks 00:13:39.942 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:39.942 log =internal log bsize=4096 blocks=16384, version=2 00:13:39.942 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:39.942 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:40.883 Discarding blocks...Done. 00:13:40.883 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:40.883 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1125 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:43.428 00:13:43.428 real 0m3.621s 00:13:43.428 user 0m0.027s 00:13:43.428 sys 0m0.078s 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:43.428 ************************************ 00:13:43.428 END TEST filesystem_xfs 00:13:43.428 ************************************ 00:13:43.428 07:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:43.688 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:43.688 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1125 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1125 ']' 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1125 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1125 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1125' 00:13:43.949 killing process with pid 1125 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1125 00:13:43.949 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1125 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:44.209 00:13:44.209 real 0m13.503s 00:13:44.209 user 0m53.142s 00:13:44.209 sys 0m1.266s 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.209 ************************************ 00:13:44.209 END TEST nvmf_filesystem_no_in_capsule 00:13:44.209 ************************************ 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:44.209 ************************************ 00:13:44.209 START TEST nvmf_filesystem_in_capsule 00:13:44.209 ************************************ 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.209 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4111 00:13:44.210 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4111 00:13:44.210 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:44.210 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 4111 ']' 00:13:44.210 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.210 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.210 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:44.210 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.210 07:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.470 [2024-07-25 07:19:51.606779] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:13:44.470 [2024-07-25 07:19:51.606839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.470 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.470 [2024-07-25 07:19:51.677631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.470 [2024-07-25 07:19:51.749816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.470 [2024-07-25 07:19:51.749856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.470 [2024-07-25 07:19:51.749864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.470 [2024-07-25 07:19:51.749872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.470 [2024-07-25 07:19:51.749877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.470 [2024-07-25 07:19:51.750035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.470 [2024-07-25 07:19:51.750148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.470 [2024-07-25 07:19:51.750302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.470 [2024-07-25 07:19:51.750470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.042 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.042 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:45.042 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.042 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.042 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 [2024-07-25 07:19:52.434245] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport 
Init *** 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 Malloc1 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.303 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.304 [2024-07-25 07:19:52.558295] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:45.304 07:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:45.304 { 00:13:45.304 "name": "Malloc1", 00:13:45.304 "aliases": [ 00:13:45.304 "29392c2b-6875-4b84-82dc-2d9262204ee2" 00:13:45.304 ], 00:13:45.304 "product_name": "Malloc disk", 00:13:45.304 "block_size": 512, 00:13:45.304 "num_blocks": 1048576, 00:13:45.304 "uuid": "29392c2b-6875-4b84-82dc-2d9262204ee2", 00:13:45.304 "assigned_rate_limits": { 00:13:45.304 "rw_ios_per_sec": 0, 00:13:45.304 "rw_mbytes_per_sec": 0, 00:13:45.304 "r_mbytes_per_sec": 0, 00:13:45.304 "w_mbytes_per_sec": 0 00:13:45.304 }, 00:13:45.304 "claimed": true, 00:13:45.304 "claim_type": "exclusive_write", 00:13:45.304 "zoned": false, 00:13:45.304 "supported_io_types": { 00:13:45.304 "read": true, 00:13:45.304 "write": true, 00:13:45.304 "unmap": true, 00:13:45.304 "flush": true, 00:13:45.304 "reset": true, 00:13:45.304 "nvme_admin": false, 00:13:45.304 "nvme_io": false, 00:13:45.304 "nvme_io_md": false, 00:13:45.304 "write_zeroes": true, 00:13:45.304 "zcopy": true, 00:13:45.304 "get_zone_info": false, 00:13:45.304 "zone_management": false, 00:13:45.304 "zone_append": false, 00:13:45.304 "compare": false, 00:13:45.304 "compare_and_write": false, 00:13:45.304 "abort": true, 00:13:45.304 "seek_hole": false, 00:13:45.304 "seek_data": false, 00:13:45.304 "copy": true, 00:13:45.304 "nvme_iov_md": false 00:13:45.304 }, 00:13:45.304 "memory_domains": [ 00:13:45.304 { 00:13:45.304 "dma_device_id": "system", 00:13:45.304 "dma_device_type": 1 00:13:45.304 }, 00:13:45.304 { 00:13:45.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.304 "dma_device_type": 2 00:13:45.304 } 00:13:45.304 ], 00:13:45.304 "driver_specific": {} 00:13:45.304 } 00:13:45.304 ]' 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:45.304 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:45.564 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:45.564 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:45.564 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:45.564 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:45.564 07:19:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:47.000 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:47.000 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:47.000 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.000 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:47.000 07:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:48.912 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:49.173 07:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:49.743 07:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:50.687 ************************************ 00:13:50.687 START TEST filesystem_in_capsule_ext4 00:13:50.687 ************************************ 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:50.687 07:19:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:50.687 mke2fs 1.46.5 (30-Dec-2021) 00:13:50.687 Discarding device blocks: 0/522240 done 00:13:50.948 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:50.948 Filesystem UUID: ffe7410b-1297-4bb7-b859-99d894bb071d 00:13:50.948 Superblock backups stored on blocks: 00:13:50.948 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:50.948 00:13:50.948 Allocating group tables: 0/64 done 00:13:50.948 Writing inode tables: 
0/64 done 00:13:50.948 Creating journal (8192 blocks): done 00:13:50.948 Writing superblocks and filesystem accounting information: 0/64 done 00:13:50.948 00:13:50.948 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:50.948 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4111 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:51.535 00:13:51.535 real 0m0.791s 00:13:51.535 user 0m0.030s 00:13:51.535 sys 0m0.068s 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:51.535 ************************************ 00:13:51.535 END TEST filesystem_in_capsule_ext4 00:13:51.535 ************************************ 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:51.535 
************************************ 00:13:51.535 START TEST filesystem_in_capsule_btrfs 00:13:51.535 ************************************ 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:51.535 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:51.536 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:51.536 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:51.536 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:51.536 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:51.536 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:51.796 btrfs-progs v6.6.2 00:13:51.796 See https://btrfs.readthedocs.io for more information. 00:13:51.796 00:13:51.796 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:51.796 NOTE: several default settings have changed in version 5.15, please make sure 00:13:51.796 this does not affect your deployments: 00:13:51.796 - DUP for metadata (-m dup) 00:13:51.796 - enabled no-holes (-O no-holes) 00:13:51.796 - enabled free-space-tree (-R free-space-tree) 00:13:51.796 00:13:51.796 Label: (null) 00:13:51.796 UUID: 450a463e-f4b7-4dba-9115-d502a184fac5 00:13:51.796 Node size: 16384 00:13:51.796 Sector size: 4096 00:13:51.796 Filesystem size: 510.00MiB 00:13:51.796 Block group profiles: 00:13:51.796 Data: single 8.00MiB 00:13:51.796 Metadata: DUP 32.00MiB 00:13:51.796 System: DUP 8.00MiB 00:13:51.796 SSD detected: yes 00:13:51.796 Zoned device: no 00:13:51.796 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:51.796 Runtime features: free-space-tree 00:13:51.796 Checksum: crc32c 00:13:51.796 Number of devices: 1 00:13:51.796 Devices: 00:13:51.796 ID SIZE PATH 00:13:51.796 1 510.00MiB /dev/nvme0n1p1 00:13:51.796 00:13:51.796 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:51.796 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:52.058 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:52.058 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:52.058 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:52.058 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:52.058 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:52.058 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:52.058 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4111 00:13:52.058 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:52.058 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:52.319 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:52.319 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:52.319 00:13:52.319 real 0m0.606s 00:13:52.319 user 0m0.024s 00:13:52.319 sys 0m0.137s 00:13:52.319 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.319 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@10 -- # set +x 00:13:52.319 ************************************ 00:13:52.319 END TEST filesystem_in_capsule_btrfs 00:13:52.320 ************************************ 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:52.320 ************************************ 00:13:52.320 START TEST filesystem_in_capsule_xfs 00:13:52.320 ************************************ 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:52.320 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:52.320 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:52.320 = sectsz=512 attr=2, projid32bit=1 00:13:52.320 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:52.320 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:52.320 data = bsize=4096 blocks=130560, imaxpct=25 00:13:52.320 = sunit=0 swidth=0 blks 00:13:52.320 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:52.320 log =internal log bsize=4096 blocks=16384, version=2 00:13:52.320 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:52.320 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:53.259 Discarding blocks...Done. 
00:13:53.259 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:53.259 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:55.171 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:55.171 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:55.171 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:55.171 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:55.171 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:55.171 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:55.171 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4111 00:13:55.171 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:55.171 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:55.171 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:55.171 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:55.171 00:13:55.171 real 0m2.768s 00:13:55.171 user 0m0.024s 00:13:55.171 sys 0m0.080s 00:13:55.172 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.172 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:55.172 ************************************ 00:13:55.172 END TEST filesystem_in_capsule_xfs 00:13:55.172 ************************************ 00:13:55.172 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4111 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 4111 ']' 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 4111 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.433 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4111 00:13:55.694 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:55.694 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:55.694 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4111' 00:13:55.694 killing process with pid 4111 00:13:55.694 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 4111 00:13:55.694 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 4111 00:13:55.694 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:55.694 00:13:55.694 real 0m11.515s 00:13:55.694 user 0m45.323s 00:13:55.694 sys 0m1.200s 00:13:55.694 07:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.694 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:55.694 ************************************ 00:13:55.694 END TEST nvmf_filesystem_in_capsule 00:13:55.694 ************************************ 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:55.956 rmmod nvme_tcp 00:13:55.956 rmmod nvme_fabrics 00:13:55.956 rmmod nvme_keyring 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.956 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.869 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:58.131 00:13:58.131 real 0m34.888s 00:13:58.131 user 1m40.680s 00:13:58.131 sys 0m8.043s 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:58.131 ************************************ 00:13:58.131 END TEST nvmf_filesystem 00:13:58.131 ************************************ 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:58.131 07:20:05 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.131 ************************************ 00:13:58.131 START TEST nvmf_target_discovery 00:13:58.131 ************************************ 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:58.131 * Looking for test storage... 00:13:58.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.131 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.132 07:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.132 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:04.725 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.725 07:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:04.725 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:04.725 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:04.725 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.725 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:14:04.987 00:14:04.987 --- 10.0.0.2 ping statistics --- 00:14:04.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.987 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.422 ms 00:14:04.987 00:14:04.987 --- 10.0.0.1 ping statistics --- 00:14:04.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.987 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.987 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=10604 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 10604 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 10604 ']' 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.247 07:20:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:05.247 [2024-07-25 07:20:12.422111] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:14:05.247 [2024-07-25 07:20:12.422182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.247 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.247 [2024-07-25 07:20:12.495554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.247 [2024-07-25 07:20:12.571567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.247 [2024-07-25 07:20:12.571607] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.247 [2024-07-25 07:20:12.571615] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.248 [2024-07-25 07:20:12.571621] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.248 [2024-07-25 07:20:12.571627] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.248 [2024-07-25 07:20:12.571773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.248 [2024-07-25 07:20:12.571898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.248 [2024-07-25 07:20:12.572056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.248 [2024-07-25 07:20:12.572058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 [2024-07-25 07:20:13.249165] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 
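Editorial aside: stripped of the xtrace noise, the trace from nvmftestinit up through nvmf_create_transport amounts to the hand-runnable sequence sketched below. This is a sketch, not the harness itself: $SPDK_DIR stands for the checkout path shown in the log, scripts/rpc.py stands in for the test's rpc_cmd wrapper, and the interface names (cvl_0_0, cvl_0_1) and 10.0.0.x addresses are simply the ones the log chose.

  # Put the target-side port of the e810 pair into its own network namespace,
  # address both ends, and open TCP/4420 toward the initiator-side interface.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # connectivity check, as in the log above

  # Start the SPDK NVMe-oF target inside the namespace, then create the TCP transport.
  # (The harness waits for /var/tmp/spdk.sock via waitforlisten before issuing RPCs.)
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192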
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 Null1 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 [2024-07-25 07:20:13.309476] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 Null2 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 Null3 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 Null4 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:06.208 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.209 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.209 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.209 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:06.470 00:14:06.470 
Discovery Log Number of Records 6, Generation counter 6 00:14:06.470 =====Discovery Log Entry 0====== 00:14:06.470 trtype: tcp 00:14:06.470 adrfam: ipv4 00:14:06.470 subtype: current discovery subsystem 00:14:06.470 treq: not required 00:14:06.470 portid: 0 00:14:06.470 trsvcid: 4420 00:14:06.470 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:06.470 traddr: 10.0.0.2 00:14:06.470 eflags: explicit discovery connections, duplicate discovery information 00:14:06.470 sectype: none 00:14:06.470 =====Discovery Log Entry 1====== 00:14:06.470 trtype: tcp 00:14:06.470 adrfam: ipv4 00:14:06.470 subtype: nvme subsystem 00:14:06.470 treq: not required 00:14:06.470 portid: 0 00:14:06.470 trsvcid: 4420 00:14:06.470 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:06.470 traddr: 10.0.0.2 00:14:06.470 eflags: none 00:14:06.470 sectype: none 00:14:06.470 =====Discovery Log Entry 2====== 00:14:06.470 trtype: tcp 00:14:06.470 adrfam: ipv4 00:14:06.470 subtype: nvme subsystem 00:14:06.470 treq: not required 00:14:06.470 portid: 0 00:14:06.470 trsvcid: 4420 00:14:06.470 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:06.470 traddr: 10.0.0.2 00:14:06.470 eflags: none 00:14:06.470 sectype: none 00:14:06.470 =====Discovery Log Entry 3====== 00:14:06.470 trtype: tcp 00:14:06.470 adrfam: ipv4 00:14:06.470 subtype: nvme subsystem 00:14:06.470 treq: not required 00:14:06.470 portid: 0 00:14:06.470 trsvcid: 4420 00:14:06.470 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:06.470 traddr: 10.0.0.2 00:14:06.470 eflags: none 00:14:06.470 sectype: none 00:14:06.470 =====Discovery Log Entry 4====== 00:14:06.470 trtype: tcp 00:14:06.470 adrfam: ipv4 00:14:06.470 subtype: nvme subsystem 00:14:06.470 treq: not required 00:14:06.470 portid: 0 00:14:06.470 trsvcid: 4420 00:14:06.470 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:06.470 traddr: 10.0.0.2 00:14:06.470 eflags: none 00:14:06.470 sectype: none 00:14:06.470 =====Discovery Log Entry 5====== 00:14:06.470 trtype: tcp 00:14:06.470 adrfam: ipv4 00:14:06.470 subtype: discovery subsystem referral 00:14:06.470 treq: not required 00:14:06.470 portid: 0 00:14:06.470 trsvcid: 4430 00:14:06.470 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:06.470 traddr: 10.0.0.2 00:14:06.470 eflags: none 00:14:06.470 sectype: none 00:14:06.470 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:06.470 Perform nvmf subsystem discovery via RPC 00:14:06.470 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:06.470 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.470 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.470 [ 00:14:06.470 { 00:14:06.470 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.470 "subtype": "Discovery", 00:14:06.470 "listen_addresses": [ 00:14:06.470 { 00:14:06.470 "trtype": "TCP", 00:14:06.470 "adrfam": "IPv4", 00:14:06.470 "traddr": "10.0.0.2", 00:14:06.470 "trsvcid": "4420" 00:14:06.470 } 00:14:06.470 ], 00:14:06.470 "allow_any_host": true, 00:14:06.470 "hosts": [] 00:14:06.470 }, 00:14:06.470 { 00:14:06.470 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.470 "subtype": "NVMe", 00:14:06.470 "listen_addresses": [ 00:14:06.470 { 00:14:06.470 "trtype": "TCP", 00:14:06.470 "adrfam": "IPv4", 00:14:06.470 "traddr": "10.0.0.2", 00:14:06.470 "trsvcid": "4420" 00:14:06.470 } 00:14:06.470 ], 00:14:06.470 
"allow_any_host": true, 00:14:06.470 "hosts": [], 00:14:06.470 "serial_number": "SPDK00000000000001", 00:14:06.470 "model_number": "SPDK bdev Controller", 00:14:06.470 "max_namespaces": 32, 00:14:06.470 "min_cntlid": 1, 00:14:06.470 "max_cntlid": 65519, 00:14:06.470 "namespaces": [ 00:14:06.470 { 00:14:06.470 "nsid": 1, 00:14:06.470 "bdev_name": "Null1", 00:14:06.470 "name": "Null1", 00:14:06.470 "nguid": "FD8A0086D598450F8537418161A35153", 00:14:06.471 "uuid": "fd8a0086-d598-450f-8537-418161a35153" 00:14:06.471 } 00:14:06.471 ] 00:14:06.471 }, 00:14:06.471 { 00:14:06.471 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:06.471 "subtype": "NVMe", 00:14:06.471 "listen_addresses": [ 00:14:06.471 { 00:14:06.471 "trtype": "TCP", 00:14:06.471 "adrfam": "IPv4", 00:14:06.471 "traddr": "10.0.0.2", 00:14:06.471 "trsvcid": "4420" 00:14:06.471 } 00:14:06.471 ], 00:14:06.471 "allow_any_host": true, 00:14:06.471 "hosts": [], 00:14:06.471 "serial_number": "SPDK00000000000002", 00:14:06.471 "model_number": "SPDK bdev Controller", 00:14:06.471 "max_namespaces": 32, 00:14:06.471 "min_cntlid": 1, 00:14:06.471 "max_cntlid": 65519, 00:14:06.471 "namespaces": [ 00:14:06.471 { 00:14:06.471 "nsid": 1, 00:14:06.471 "bdev_name": "Null2", 00:14:06.471 "name": "Null2", 00:14:06.471 "nguid": "7069EA4A7717430C81061CB129BDC854", 00:14:06.471 "uuid": "7069ea4a-7717-430c-8106-1cb129bdc854" 00:14:06.471 } 00:14:06.471 ] 00:14:06.471 }, 00:14:06.471 { 00:14:06.471 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:06.471 "subtype": "NVMe", 00:14:06.471 "listen_addresses": [ 00:14:06.471 { 00:14:06.471 "trtype": "TCP", 00:14:06.471 "adrfam": "IPv4", 00:14:06.471 "traddr": "10.0.0.2", 00:14:06.471 "trsvcid": "4420" 00:14:06.471 } 00:14:06.471 ], 00:14:06.471 "allow_any_host": true, 00:14:06.471 "hosts": [], 00:14:06.471 "serial_number": "SPDK00000000000003", 00:14:06.471 "model_number": "SPDK bdev Controller", 00:14:06.471 "max_namespaces": 32, 00:14:06.471 "min_cntlid": 1, 00:14:06.471 "max_cntlid": 65519, 00:14:06.471 "namespaces": [ 00:14:06.471 { 00:14:06.471 "nsid": 1, 00:14:06.471 "bdev_name": "Null3", 00:14:06.471 "name": "Null3", 00:14:06.471 "nguid": "284EF141DA9D46EB99C9506AE70DDFFD", 00:14:06.471 "uuid": "284ef141-da9d-46eb-99c9-506ae70ddffd" 00:14:06.471 } 00:14:06.471 ] 00:14:06.471 }, 00:14:06.471 { 00:14:06.471 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:06.471 "subtype": "NVMe", 00:14:06.471 "listen_addresses": [ 00:14:06.471 { 00:14:06.471 "trtype": "TCP", 00:14:06.471 "adrfam": "IPv4", 00:14:06.471 "traddr": "10.0.0.2", 00:14:06.471 "trsvcid": "4420" 00:14:06.471 } 00:14:06.471 ], 00:14:06.471 "allow_any_host": true, 00:14:06.471 "hosts": [], 00:14:06.471 "serial_number": "SPDK00000000000004", 00:14:06.471 "model_number": "SPDK bdev Controller", 00:14:06.471 "max_namespaces": 32, 00:14:06.471 "min_cntlid": 1, 00:14:06.471 "max_cntlid": 65519, 00:14:06.471 "namespaces": [ 00:14:06.471 { 00:14:06.471 "nsid": 1, 00:14:06.471 "bdev_name": "Null4", 00:14:06.471 "name": "Null4", 00:14:06.471 "nguid": "D00CF198925F49AD86D59374F9F8EA8E", 00:14:06.471 "uuid": "d00cf198-925f-49ad-86d5-9374f9f8ea8e" 00:14:06.471 } 00:14:06.471 ] 00:14:06.471 } 00:14:06.471 ] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:06.471 07:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 
00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.471 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.471 rmmod nvme_tcp 00:14:06.471 rmmod nvme_fabrics 00:14:06.471 rmmod nvme_keyring 00:14:06.732 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- 
# return 0 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 10604 ']' 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 10604 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 10604 ']' 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 10604 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 10604 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 10604' 00:14:06.733 killing process with pid 10604 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 10604 00:14:06.733 07:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 10604 00:14:06.733 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.733 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:06.733 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:06.733 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.733 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:06.733 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.733 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.733 07:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.285 00:14:09.285 real 0m10.814s 00:14:09.285 user 0m8.010s 00:14:09.285 sys 0m5.524s 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.285 ************************************ 00:14:09.285 END TEST nvmf_target_discovery 00:14:09.285 ************************************ 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra -- 
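Editorial aside: teardown in the trace is symmetric to the setup. Below is a condensed sketch of what the traced commands do, with the same $SPDK_DIR and rpc.py caveats; the last two lines stand in for the harness's killprocess and remove_spdk_ns helpers rather than reproducing them literally.

  for i in 1 2 3 4; do
    $SPDK_DIR/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    $SPDK_DIR/scripts/rpc.py bdev_null_delete Null$i
  done
  $SPDK_DIR/scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  $SPDK_DIR/scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'   # expect no remaining bdevs
  modprobe -r nvme-tcp nvme-fabrics   # unload the kernel initiator modules, as the log shows
  kill "$nvmfpid"                     # stop nvmf_tgt (pid 10604 in this run)
  ip netns delete cvl_0_0_ns_spdk     # cvl_0_0 returns to the root namespace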
common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.285 ************************************ 00:14:09.285 START TEST nvmf_referrals 00:14:09.285 ************************************ 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:09.285 * Looking for test storage... 00:14:09.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.285 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.286 07:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.286 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 
00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:14:15.909 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:15.910 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.910 
07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:15.910 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:15.910 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:15.910 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.910 07:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.910 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:16.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:14:16.171 00:14:16.171 --- 10.0.0.2 ping statistics --- 00:14:16.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.171 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:14:16.171 00:14:16.171 --- 10.0.0.1 ping statistics --- 00:14:16.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.171 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=15171 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 15171 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 15171 ']' 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
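[editor's note] With both pings succeeding, the target application is started inside the namespace (the ip netns exec ... nvmf_tgt line above). The topology those pings verify condenses to the following commands, taken directly from the trace; interface and namespace names are the ones detected in this run:

  # One E810 port becomes the target inside a namespace, the other stays in
  # the root namespace as the initiator.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns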
00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.171 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:16.432 [2024-07-25 07:20:23.566190] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:14:16.432 [2024-07-25 07:20:23.566262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.432 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.432 [2024-07-25 07:20:23.638152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.432 [2024-07-25 07:20:23.713258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.432 [2024-07-25 07:20:23.713298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.432 [2024-07-25 07:20:23.713306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.432 [2024-07-25 07:20:23.713312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.432 [2024-07-25 07:20:23.713318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.432 [2024-07-25 07:20:23.713402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.432 [2024-07-25 07:20:23.713533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.432 [2024-07-25 07:20:23.713690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.432 [2024-07-25 07:20:23.713692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.007 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.007 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:14:17.007 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.007 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:17.007 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.268 [2024-07-25 07:20:24.395166] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.268 07:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.268 [2024-07-25 07:20:24.411355] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.268 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:17.269 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.530 07:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:17.530 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:17.791 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:17.791 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:17.791 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:17.791 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.791 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
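[editor's note] The referral bookkeeping exercised above can be reproduced outside the harness. A hedged sketch using SPDK's scripts/rpc.py (rpc_cmd in this test forwards to the same RPC names) and nvme-cli against the discovery listener on 10.0.0.2:8009; the rpc.py invocation path and the omission of the --hostnqn/--hostid flags used in the trace are assumptions of the sketch:

  # Create the TCP transport, expose a discovery listener, add a referral,
  # then read it back both over RPC and from the discovery log page.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'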
00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:17.791 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:18.052 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:18.312 07:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:18.312 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
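[editor's note] A referral can point either at another discovery subsystem or directly at an NVM subsystem, and the jq filters above pick entries out of the discovery log page by subtype. A small hedged sketch of the same filtering, with the address from this run and the host NQN flags omitted:

  # Read the discovery log as JSON once, then split entries by subtype as the
  # get_discovery_entries helper above does.
  log=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json)
  echo "$log" | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
  echo "$log" | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'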
00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:18.573 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:18.834 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
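[editor's note] nvmftestfini then tears the setup back down. A condensed, hedged sketch of the steps that follow in the trace; the explicit ip netns delete stands in for what _remove_spdk_ns is assumed to do, and the pid is the one from this run:

  # Unload the NVMe/TCP host modules, stop the target, and remove the
  # test namespace and leftover addresses.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 15171                        # killprocess: nvmf_tgt pid from this run
  ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1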
00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.095 rmmod nvme_tcp 00:14:19.095 rmmod nvme_fabrics 00:14:19.095 rmmod nvme_keyring 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 15171 ']' 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 15171 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 15171 ']' 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 15171 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 15171 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 15171' 00:14:19.095 killing process with pid 15171 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 15171 00:14:19.095 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 15171 00:14:19.356 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.356 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.356 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.356 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.356 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.356 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.356 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.356 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.269 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:21.269 00:14:21.269 real 0m12.381s 00:14:21.269 user 0m14.102s 00:14:21.269 sys 0m6.052s 00:14:21.269 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.269 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:21.269 ************************************ 00:14:21.269 END TEST nvmf_referrals 00:14:21.269 ************************************ 00:14:21.269 07:20:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:21.269 07:20:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:21.269 07:20:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.269 07:20:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.531 ************************************ 00:14:21.531 START TEST nvmf_connect_disconnect 00:14:21.531 ************************************ 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:21.531 * Looking for test storage... 00:14:21.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.531 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:14:21.532 07:20:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:29.681 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:29.681 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.681 07:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.681 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:29.681 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:29.682 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.682 07:20:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:14:29.682 00:14:29.682 --- 10.0.0.2 ping statistics --- 00:14:29.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.682 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:29.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:14:29.682 00:14:29.682 --- 10.0.0.1 ping statistics --- 00:14:29.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.682 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=20031 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 20031 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 20031 ']' 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:29.682 [2024-07-25 07:20:36.173473] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
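
The nvmf_tcp_init trace above boils down to a small, repeatable test-bed: the first E810 port (cvl_0_0) is moved into a private network namespace and serves as the NVMe/TCP target side, while the second port (cvl_0_1) stays in the root namespace as the initiator side, and every later nvmf_tgt invocation is prefixed with "ip netns exec cvl_0_0_ns_spdk". A condensed sketch of that sequence, using the interface names and addresses from this run (the real logic lives in nvmf/common.sh and covers more device and transport combinations than shown here):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side (NVMF_INITIATOR_IP)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (NVMF_FIRST_TARGET_IP)
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                         # root namespace -> namespaced target port
    ip netns exec "$NS" ping -c 1 10.0.0.1                     # namespace -> initiator port

The two pings in the log are the sanity check that this topology is routable in both directions before the target application is started.
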
00:14:29.682 [2024-07-25 07:20:36.173566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.682 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.682 [2024-07-25 07:20:36.247286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.682 [2024-07-25 07:20:36.321558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.682 [2024-07-25 07:20:36.321596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.682 [2024-07-25 07:20:36.321604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.682 [2024-07-25 07:20:36.321611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.682 [2024-07-25 07:20:36.321617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.682 [2024-07-25 07:20:36.321753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.682 [2024-07-25 07:20:36.321877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.682 [2024-07-25 07:20:36.322038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.682 [2024-07-25 07:20:36.322040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.682 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:29.682 [2024-07-25 07:20:37.004162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.682 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.682 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:29.682 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.682 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:29.682 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.682 07:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:29.682 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:29.682 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.682 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:29.682 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.683 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:29.683 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.683 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:29.944 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.944 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.944 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.944 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:29.944 [2024-07-25 07:20:37.063587] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.944 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.944 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:29.944 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:29.944 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:34.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.306 07:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.306 rmmod nvme_tcp 00:14:48.306 rmmod nvme_fabrics 00:14:48.306 rmmod nvme_keyring 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 20031 ']' 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 20031 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 20031 ']' 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 20031 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 20031 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 20031' 00:14:48.306 killing process with pid 20031 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 20031 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 20031 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.306 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:50.862 00:14:50.862 real 0m29.079s 00:14:50.862 user 1m19.201s 00:14:50.862 sys 0m6.620s 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.862 07:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:50.862 ************************************ 00:14:50.862 END TEST nvmf_connect_disconnect 00:14:50.862 ************************************ 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.862 ************************************ 00:14:50.862 START TEST nvmf_multitarget 00:14:50.862 ************************************ 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:50.862 * Looking for test storage... 00:14:50.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.862 07:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:50.862 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:50.863 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:50.863 07:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:59.013 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.013 07:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:59.013 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:59.013 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:59.013 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.013 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.014 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:59.014 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:59.014 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.014 07:21:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:59.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:59.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:14:59.014 00:14:59.014 --- 10.0.0.2 ping statistics --- 00:14:59.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.014 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:14:59.014 00:14:59.014 --- 10.0.0.1 ping statistics --- 00:14:59.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.014 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=28044 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 28044 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 28044 ']' 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
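
As in the previous test, the target application is launched inside the namespace and the harness blocks until its RPC socket answers before any rpc.py calls are issued. A minimal sketch of that nvmfappstart/waitforlisten pattern, assuming the default /var/tmp/spdk.sock RPC socket; the polling loop below is a simplification of what common/autotest_common.sh actually does, not a verbatim copy:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket; rpc_get_methods succeeds once the app is up and listening
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done
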
00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.014 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:59.014 [2024-07-25 07:21:05.371487] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:14:59.014 [2024-07-25 07:21:05.371551] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.014 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.014 [2024-07-25 07:21:05.443090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.014 [2024-07-25 07:21:05.518182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.014 [2024-07-25 07:21:05.518226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.014 [2024-07-25 07:21:05.518234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.014 [2024-07-25 07:21:05.518241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.014 [2024-07-25 07:21:05.518247] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.014 [2024-07-25 07:21:05.518314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.014 [2024-07-25 07:21:05.518449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.014 [2024-07-25 07:21:05.518606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.014 [2024-07-25 07:21:05.518606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.014 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.014 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:59.014 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:59.014 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:59.014 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:59.014 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.014 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:59.014 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:59.014 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:59.014 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:59.014 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:59.014 "nvmf_tgt_1" 00:14:59.275 07:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:59.275 "nvmf_tgt_2" 00:14:59.275 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:59.275 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:59.275 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:59.275 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:59.535 true 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:59.535 true 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.535 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.795 rmmod nvme_tcp 00:14:59.795 rmmod nvme_fabrics 00:14:59.795 rmmod nvme_keyring 00:14:59.795 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.795 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:59.795 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:59.795 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 28044 ']' 00:14:59.795 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 28044 00:14:59.795 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 28044 ']' 00:14:59.795 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 28044 00:14:59.795 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:59.795 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
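
Stripped of the xtrace noise, the multitarget test above is a short create/verify/delete cycle driven through multitarget_rpc.py against the namespaced target. The sketch below uses the command names and flags exactly as logged, and the count checks mirror the jq-length comparisons in the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target to start with
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target
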
00:14:59.795 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 28044 00:14:59.795 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:59.795 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:59.795 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 28044' 00:14:59.795 killing process with pid 28044 00:14:59.795 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 28044 00:14:59.795 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 28044 00:14:59.796 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:59.796 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:59.796 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:59.796 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.796 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:59.796 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.796 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.796 07:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:02.343 00:15:02.343 real 0m11.422s 00:15:02.343 user 0m9.320s 00:15:02.343 sys 0m5.943s 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:02.343 ************************************ 00:15:02.343 END TEST nvmf_multitarget 00:15:02.343 ************************************ 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:02.343 ************************************ 00:15:02.343 START TEST nvmf_rpc 00:15:02.343 ************************************ 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:02.343 * Looking for test storage... 
00:15:02.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.343 07:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.343 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.344 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.344 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.344 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:02.344 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:02.344 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:02.344 07:21:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.498 07:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.498 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:10.499 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:10.499 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:10.499 
07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:10.499 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:10.499 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.499 07:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:10.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:15:10.499 00:15:10.499 --- 10.0.0.2 ping statistics --- 00:15:10.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.499 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:15:10.499 00:15:10.499 --- 10.0.0.1 ping statistics --- 00:15:10.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.499 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=33093 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 33093 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 33093 ']' 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.499 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.499 [2024-07-25 07:21:16.760552] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:15:10.499 [2024-07-25 07:21:16.760608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.499 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.499 [2024-07-25 07:21:16.832272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.499 [2024-07-25 07:21:16.903635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.499 [2024-07-25 07:21:16.903672] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.499 [2024-07-25 07:21:16.903679] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.499 [2024-07-25 07:21:16.903686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.499 [2024-07-25 07:21:16.903692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
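The setup traced above comes from nvmf_tcp_init in test/nvmf/common.sh: the two ice (E810) ports detected earlier are exposed as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a fresh network namespace to play the target side, 10.0.0.2/24 (target) and 10.0.0.1/24 (initiator) are assigned, TCP port 4420 is opened in iptables, reachability is verified with one ping in each direction, and nvmf_tgt is then started inside the namespace with -e 0xFFFF -m 0xF. A condensed sketch of the same wiring, assuming the interfaces already carry the cvl_0_* names, looks like:

    # Condensed sketch of the namespace wiring traced above (assumes cvl_0_0/cvl_0_1 already exist).
    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &                    # target runs inside the namespace

The DPDK/EAL and app_setup_trace notices that follow confirm the application came up with the 0xF core mask, i.e. reactors on cores 0 through 3.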
00:15:10.499 [2024-07-25 07:21:16.903830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.499 [2024-07-25 07:21:16.903944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.500 [2024-07-25 07:21:16.904099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.500 [2024-07-25 07:21:16.904100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:10.500 "tick_rate": 2400000000, 00:15:10.500 "poll_groups": [ 00:15:10.500 { 00:15:10.500 "name": "nvmf_tgt_poll_group_000", 00:15:10.500 "admin_qpairs": 0, 00:15:10.500 "io_qpairs": 0, 00:15:10.500 "current_admin_qpairs": 0, 00:15:10.500 "current_io_qpairs": 0, 00:15:10.500 "pending_bdev_io": 0, 00:15:10.500 "completed_nvme_io": 0, 00:15:10.500 "transports": [] 00:15:10.500 }, 00:15:10.500 { 00:15:10.500 "name": "nvmf_tgt_poll_group_001", 00:15:10.500 "admin_qpairs": 0, 00:15:10.500 "io_qpairs": 0, 00:15:10.500 "current_admin_qpairs": 0, 00:15:10.500 "current_io_qpairs": 0, 00:15:10.500 "pending_bdev_io": 0, 00:15:10.500 "completed_nvme_io": 0, 00:15:10.500 "transports": [] 00:15:10.500 }, 00:15:10.500 { 00:15:10.500 "name": "nvmf_tgt_poll_group_002", 00:15:10.500 "admin_qpairs": 0, 00:15:10.500 "io_qpairs": 0, 00:15:10.500 "current_admin_qpairs": 0, 00:15:10.500 "current_io_qpairs": 0, 00:15:10.500 "pending_bdev_io": 0, 00:15:10.500 "completed_nvme_io": 0, 00:15:10.500 "transports": [] 00:15:10.500 }, 00:15:10.500 { 00:15:10.500 "name": "nvmf_tgt_poll_group_003", 00:15:10.500 "admin_qpairs": 0, 00:15:10.500 "io_qpairs": 0, 00:15:10.500 "current_admin_qpairs": 0, 00:15:10.500 "current_io_qpairs": 0, 00:15:10.500 "pending_bdev_io": 0, 00:15:10.500 "completed_nvme_io": 0, 00:15:10.500 "transports": [] 00:15:10.500 } 00:15:10.500 ] 00:15:10.500 }' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
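Before creating any transport, rpc.sh asserts that nvmf_get_stats reports one poll group per reactor core (four here, matching the 0xF mask) and that no transport is attached and every qpair counter is still zero; jcount and jsum in the script are thin jq/awk helpers over the returned JSON, as the traces around this point show. Assuming rpc_cmd ultimately forwards to scripts/rpc.py on the default /var/tmp/spdk.sock, the same checks can be reproduced by hand roughly as:

    # Rough standalone version of the poll-group checks above (the rpc.py mapping is an assumption).
    stats=$(./scripts/rpc.py nvmf_get_stats)
    echo "$stats" | jq '.poll_groups[].name' | wc -l                                  # expect 4
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'    # expect 0
    echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'    # expect 0
    echo "$stats" | jq '.poll_groups[0].transports[0]'                                # null until the tcp transport exists

Once nvmf_create_transport -t tcp -o -u 8192 runs (a few lines below in the trace), the same query shows a TCP entry in every poll group's transports array.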
00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.500 [2024-07-25 07:21:17.693578] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:10.500 "tick_rate": 2400000000, 00:15:10.500 "poll_groups": [ 00:15:10.500 { 00:15:10.500 "name": "nvmf_tgt_poll_group_000", 00:15:10.500 "admin_qpairs": 0, 00:15:10.500 "io_qpairs": 0, 00:15:10.500 "current_admin_qpairs": 0, 00:15:10.500 "current_io_qpairs": 0, 00:15:10.500 "pending_bdev_io": 0, 00:15:10.500 "completed_nvme_io": 0, 00:15:10.500 "transports": [ 00:15:10.500 { 00:15:10.500 "trtype": "TCP" 00:15:10.500 } 00:15:10.500 ] 00:15:10.500 }, 00:15:10.500 { 00:15:10.500 "name": "nvmf_tgt_poll_group_001", 00:15:10.500 "admin_qpairs": 0, 00:15:10.500 "io_qpairs": 0, 00:15:10.500 "current_admin_qpairs": 0, 00:15:10.500 "current_io_qpairs": 0, 00:15:10.500 "pending_bdev_io": 0, 00:15:10.500 "completed_nvme_io": 0, 00:15:10.500 "transports": [ 00:15:10.500 { 00:15:10.500 "trtype": "TCP" 00:15:10.500 } 00:15:10.500 ] 00:15:10.500 }, 00:15:10.500 { 00:15:10.500 "name": "nvmf_tgt_poll_group_002", 00:15:10.500 "admin_qpairs": 0, 00:15:10.500 "io_qpairs": 0, 00:15:10.500 "current_admin_qpairs": 0, 00:15:10.500 "current_io_qpairs": 0, 00:15:10.500 "pending_bdev_io": 0, 00:15:10.500 "completed_nvme_io": 0, 00:15:10.500 "transports": [ 00:15:10.500 { 00:15:10.500 "trtype": "TCP" 00:15:10.500 } 00:15:10.500 ] 00:15:10.500 }, 00:15:10.500 { 00:15:10.500 "name": "nvmf_tgt_poll_group_003", 00:15:10.500 "admin_qpairs": 0, 00:15:10.500 "io_qpairs": 0, 00:15:10.500 "current_admin_qpairs": 0, 00:15:10.500 "current_io_qpairs": 0, 00:15:10.500 "pending_bdev_io": 0, 00:15:10.500 "completed_nvme_io": 0, 00:15:10.500 "transports": [ 00:15:10.500 { 00:15:10.500 "trtype": "TCP" 00:15:10.500 } 00:15:10.500 ] 00:15:10.500 } 00:15:10.500 ] 00:15:10.500 }' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:10.500 07:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.500 Malloc1 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.500 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.762 [2024-07-25 07:21:17.881277] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:10.762 [2024-07-25 07:21:17.907945] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:15:10.762 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:10.762 could not add new controller: failed to write to nvme-fabrics device 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.762 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:12.148 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:12.148 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:12.148 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.148 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:12.148 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:14.692 [2024-07-25 07:21:21.674117] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:15:14.692 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:14.692 could not add new controller: failed to write to nvme-fabrics device 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:14.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:14.693 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:14.693 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.693 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.693 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.693 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:16.078 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:16.078 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:16.078 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.078 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:16.078 07:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
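Taken together, the traces since the transport was created exercise per-host access control end to end: a 64 MiB Malloc1 bdev with 512-byte blocks is exported through subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, allow-any-host is disabled, an nvme connect with the generated uuid hostnqn is rejected with "does not allow host", nvmf_subsystem_add_host then authorizes that hostnqn and the connect succeeds (waitforserial polls lsblk for the SPDKISFASTANDAWESOME serial), and after a disconnect the host is removed again, the connect fails once more, and nvmf_subsystem_allow_any_host -e re-opens the subsystem before the final connect above. A condensed sketch of that round-trip, assuming rpc_cmd maps to scripts/rpc.py and with HOSTNQN/HOSTID standing in for the uuid values generated earlier in the log, is:

    # Condensed sketch of the host-authorization round-trip traced above.
    # HOSTNQN/HOSTID stand in for the values produced by 'nvme gen-hostnqn' earlier in this log.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1        # require an explicit host list
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 || echo "rejected as expected"
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"      # authorize this host
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME                              # waitforserial's check
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"   # connect is refused again
    ./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1        # re-open to any host

The loop that follows (for i in $(seq 1 5)) then repeats a simpler cycle five times, creating the subsystem, adding the listener and namespace, allowing any host, connecting, waiting for the serial, disconnecting, removing namespace 5, and deleting the subsystem, to shake out subsystem lifecycle handling purely through RPC.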
00:15:17.999 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:17.999 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:17.999 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.999 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:17.999 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.999 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:17.999 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.000 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:18.000 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:18.000 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:18.000 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.000 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:18.000 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.000 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:18.000 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.000 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.000 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.261 [2024-07-25 07:21:25.394334] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.261 
07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.261 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:19.647 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:19.647 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:19.647 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.647 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:19.647 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:21.562 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:21.562 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:21.562 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:21.824 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:21.824 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:21.824 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:21.824 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:21.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.824 [2024-07-25 07:21:29.120027] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.824 07:21:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:23.738 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:23.738 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:15:23.738 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.738 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:23.738 07:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.655 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.656 [2024-07-25 07:21:32.861188] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.656 07:21:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:27.042 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:27.042 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:27.042 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:27.042 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:27.042 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.591 07:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.591 [2024-07-25 07:21:36.588314] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.591 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:29.592 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.592 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:29.592 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.592 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:30.977 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:30.977 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:30.977 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:30.977 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:30.977 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:32.891 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:32.891 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:32.891 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.891 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:32.891 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.891 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:32.891 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:33.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.152 [2024-07-25 07:21:40.385719] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.152 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:35.070 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:35.070 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:35.070 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:35.070 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:35.070 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:36.983 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:36.983 07:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:36.983 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:36.983 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:36.983 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:36.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 
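The loop traced above (target/rpc.sh@81-94) drives one full create/connect/verify/teardown cycle per iteration. Below is a condensed sketch of that cycle using the same rpc.py path, NQN, serial number and listener address that appear in the trace; the wait loop is a simplified stand-in for the waitforserial/waitforserial_disconnect helpers, and the --hostnqn/--hostid options recorded in the trace are omitted for brevity.

#!/usr/bin/env bash
# Condensed sketch of the create/connect/verify/teardown cycle traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
serial=SPDKISFASTANDAWESOME

$rpc nvmf_create_subsystem "$nqn" -s "$serial"
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
$rpc nvmf_subsystem_allow_any_host "$nqn"

nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420

# Wait until a block device carrying the subsystem serial shows up (waitforserial).
for ((i = 0; i <= 15; i++)); do
    (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && break
    sleep 2
done

nvme disconnect -n "$nqn"
$rpc nvmf_subsystem_remove_ns "$nqn" 5
$rpc nvmf_delete_subsystem "$nqn"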
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 [2024-07-25 07:21:44.156524] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 [2024-07-25 07:21:44.216657] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 [2024-07-25 07:21:44.280842] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.984 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.985 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.985 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.985 [2024-07-25 07:21:44.341048] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.985 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.985 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.985 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.985 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 [2024-07-25 07:21:44.397270] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.246 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.246 07:21:44 
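The next block of the trace (target/rpc.sh@99-107) repeats a shorter cycle five times: subsystem created, TCP listener and namespace added, any-host access enabled, then the namespace removed and the subsystem deleted, with no host connect in between. A minimal sketch of that loop, under the same rpc.py assumption as the sketch above:

# Minimal sketch of the seq 1 5 setup/teardown loop (no host connect).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1     # NSID auto-assigned, first one is 1
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"
done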
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:37.247 "tick_rate": 2400000000, 00:15:37.247 "poll_groups": [ 00:15:37.247 { 00:15:37.247 "name": "nvmf_tgt_poll_group_000", 00:15:37.247 "admin_qpairs": 0, 00:15:37.247 "io_qpairs": 224, 00:15:37.247 "current_admin_qpairs": 0, 00:15:37.247 "current_io_qpairs": 0, 00:15:37.247 "pending_bdev_io": 0, 00:15:37.247 "completed_nvme_io": 473, 00:15:37.247 "transports": [ 00:15:37.247 { 00:15:37.247 "trtype": "TCP" 00:15:37.247 } 00:15:37.247 ] 00:15:37.247 }, 00:15:37.247 { 00:15:37.247 "name": "nvmf_tgt_poll_group_001", 00:15:37.247 "admin_qpairs": 1, 00:15:37.247 "io_qpairs": 223, 00:15:37.247 "current_admin_qpairs": 0, 00:15:37.247 "current_io_qpairs": 0, 00:15:37.247 "pending_bdev_io": 0, 00:15:37.247 "completed_nvme_io": 223, 00:15:37.247 "transports": [ 00:15:37.247 { 00:15:37.247 "trtype": "TCP" 00:15:37.247 } 00:15:37.247 ] 00:15:37.247 }, 00:15:37.247 { 00:15:37.247 "name": "nvmf_tgt_poll_group_002", 00:15:37.247 "admin_qpairs": 6, 00:15:37.247 "io_qpairs": 218, 00:15:37.247 "current_admin_qpairs": 0, 00:15:37.247 "current_io_qpairs": 0, 00:15:37.247 "pending_bdev_io": 0, 00:15:37.247 "completed_nvme_io": 220, 00:15:37.247 "transports": [ 00:15:37.247 { 00:15:37.247 "trtype": "TCP" 00:15:37.247 } 00:15:37.247 ] 00:15:37.247 }, 00:15:37.247 { 00:15:37.247 "name": "nvmf_tgt_poll_group_003", 00:15:37.247 "admin_qpairs": 0, 00:15:37.247 "io_qpairs": 224, 00:15:37.247 "current_admin_qpairs": 0, 00:15:37.247 "current_io_qpairs": 0, 00:15:37.247 "pending_bdev_io": 0, 00:15:37.247 "completed_nvme_io": 323, 00:15:37.247 "transports": [ 00:15:37.247 { 00:15:37.247 "trtype": "TCP" 00:15:37.247 } 00:15:37.247 ] 00:15:37.247 } 00:15:37.247 ] 00:15:37.247 }' 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:37.247 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:37.247 rmmod nvme_tcp 00:15:37.247 rmmod nvme_fabrics 00:15:37.247 rmmod nvme_keyring 00:15:37.507 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:37.507 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:37.507 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:37.507 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 33093 ']' 00:15:37.507 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 33093 00:15:37.507 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 33093 ']' 00:15:37.507 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 33093 00:15:37.507 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:37.507 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.507 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 33093 00:15:37.507 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 33093' 00:15:37.508 killing process with pid 33093 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 33093 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 33093 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
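The jsum checks traced here sum one numeric field across all poll groups of the nvmf_get_stats output with jq and awk; in this run the admin_qpairs sum is 7 and the io_qpairs sum is 889. A small sketch of that aggregation, assuming jq and awk as used in the trace (the real helper in target/rpc.sh may differ in detail):

# Sketch of the jsum-style aggregation applied to the stats JSON above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
stats=$($rpc nvmf_get_stats)

jsum() {
    # Sum one numeric field across all poll groups.
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in the run above
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 889 in the run above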
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.508 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.060 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.060 00:15:40.060 real 0m37.597s 00:15:40.060 user 1m53.733s 00:15:40.060 sys 0m7.179s 00:15:40.060 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:40.060 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.060 ************************************ 00:15:40.060 END TEST nvmf_rpc 00:15:40.060 ************************************ 00:15:40.060 07:21:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:40.060 07:21:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:40.060 07:21:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:40.060 07:21:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.060 ************************************ 00:15:40.060 START TEST nvmf_invalid 00:15:40.060 ************************************ 00:15:40.060 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:40.060 * Looking for test storage... 00:15:40.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:40.060 07:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.060 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:40.061 07:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:40.061 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:46.698 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:46.698 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.698 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:46.699 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.699 07:21:53 
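The device scan traced above resolves each supported PCI address to its kernel net devices through sysfs; in this run the two E810 ports 0000:4b:00.0 and 0000:4b:00.1 map to cvl_0_0 and cvl_0_1. A minimal sketch of that lookup (the echo wording is illustrative):

# For each supported PCI address, list the net devices exposed under its sysfs node.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net device under $pci: $(basename "$dev")"
    done
done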
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:46.699 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:46.699 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:46.699 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:46.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:15:46.960 00:15:46.960 --- 10.0.0.2 ping statistics --- 00:15:46.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.960 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:46.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:15:46.960 00:15:46.960 --- 10.0.0.1 ping statistics --- 00:15:46.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.960 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=42634 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 42634 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 42634 ']' 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.960 07:21:54 
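The TCP test setup traced here splits the two ports between the initiator and a dedicated target namespace, assigns 10.0.0.1/10.0.0.2, opens TCP port 4420, and verifies reachability with ping in both directions. The commands below mirror the ones recorded in the trace (run as root):

# Target/initiator network split used by the autotest, as recorded above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator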
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:46.960 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:46.960 [2024-07-25 07:21:54.212341] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:15:46.961 [2024-07-25 07:21:54.212394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.961 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.961 [2024-07-25 07:21:54.281085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:47.222 [2024-07-25 07:21:54.350436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.222 [2024-07-25 07:21:54.350476] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.222 [2024-07-25 07:21:54.350484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.222 [2024-07-25 07:21:54.350491] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.222 [2024-07-25 07:21:54.350497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
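The nvmf_tcp_init sequence traced above amounts to a few lines of ip/iptables plumbing: one of the two cvl interfaces is moved into a private network namespace for the target, each side gets an address on 10.0.0.0/24, the links are brought up, TCP port 4420 is opened, and reachability is verified in both directions with ping. A condensed sketch of those steps, using the same interface and namespace names that appear in the trace (a summary, not the literal nvmf/common.sh helper):

  ip netns add cvl_0_0_ns_spdk                       # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into it

  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (inside the namespace)

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in

  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Every later nvmf_tgt invocation is prefixed with "ip netns exec cvl_0_0_ns_spdk", so the target listens from inside the namespace while the initiator-side tools keep running in the root namespace.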
00:15:47.222 [2024-07-25 07:21:54.350644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.222 [2024-07-25 07:21:54.350765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.222 [2024-07-25 07:21:54.350922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.222 [2024-07-25 07:21:54.350923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.794 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:47.794 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:47.794 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:47.794 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:47.794 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:47.794 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.794 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:47.794 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17092 00:15:48.057 [2024-07-25 07:21:55.182742] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:48.057 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:48.057 { 00:15:48.057 "nqn": "nqn.2016-06.io.spdk:cnode17092", 00:15:48.057 "tgt_name": "foobar", 00:15:48.057 "method": "nvmf_create_subsystem", 00:15:48.057 "req_id": 1 00:15:48.057 } 00:15:48.057 Got JSON-RPC error response 00:15:48.057 response: 00:15:48.057 { 00:15:48.057 "code": -32603, 00:15:48.057 "message": "Unable to find target foobar" 00:15:48.057 }' 00:15:48.057 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:48.057 { 00:15:48.057 "nqn": "nqn.2016-06.io.spdk:cnode17092", 00:15:48.057 "tgt_name": "foobar", 00:15:48.057 "method": "nvmf_create_subsystem", 00:15:48.057 "req_id": 1 00:15:48.057 } 00:15:48.057 Got JSON-RPC error response 00:15:48.057 response: 00:15:48.057 { 00:15:48.057 "code": -32603, 00:15:48.057 "message": "Unable to find target foobar" 00:15:48.057 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:48.057 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:48.057 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18605 00:15:48.057 [2024-07-25 07:21:55.359376] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18605: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:48.057 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:48.057 { 00:15:48.057 "nqn": "nqn.2016-06.io.spdk:cnode18605", 00:15:48.057 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:48.057 "method": "nvmf_create_subsystem", 00:15:48.057 "req_id": 1 00:15:48.057 } 00:15:48.057 Got JSON-RPC error 
response 00:15:48.057 response: 00:15:48.057 { 00:15:48.057 "code": -32602, 00:15:48.057 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:48.057 }' 00:15:48.057 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:48.057 { 00:15:48.057 "nqn": "nqn.2016-06.io.spdk:cnode18605", 00:15:48.057 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:48.057 "method": "nvmf_create_subsystem", 00:15:48.057 "req_id": 1 00:15:48.057 } 00:15:48.057 Got JSON-RPC error response 00:15:48.057 response: 00:15:48.057 { 00:15:48.057 "code": -32602, 00:15:48.057 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:48.057 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:48.057 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:48.057 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16154 00:15:48.319 [2024-07-25 07:21:55.531904] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16154: invalid model number 'SPDK_Controller' 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:48.319 { 00:15:48.319 "nqn": "nqn.2016-06.io.spdk:cnode16154", 00:15:48.319 "model_number": "SPDK_Controller\u001f", 00:15:48.319 "method": "nvmf_create_subsystem", 00:15:48.319 "req_id": 1 00:15:48.319 } 00:15:48.319 Got JSON-RPC error response 00:15:48.319 response: 00:15:48.319 { 00:15:48.319 "code": -32602, 00:15:48.319 "message": "Invalid MN SPDK_Controller\u001f" 00:15:48.319 }' 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:48.319 { 00:15:48.319 "nqn": "nqn.2016-06.io.spdk:cnode16154", 00:15:48.319 "model_number": "SPDK_Controller\u001f", 00:15:48.319 "method": "nvmf_create_subsystem", 00:15:48.319 "req_id": 1 00:15:48.319 } 00:15:48.319 Got JSON-RPC error response 00:15:48.319 response: 00:15:48.319 { 00:15:48.319 "code": -32602, 00:15:48.319 "message": "Invalid MN SPDK_Controller\u001f" 00:15:48.319 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 68 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:48.319 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.320 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:48.582 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:48.583 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:48.583 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.583 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.583 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:48.583 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:48.583 07:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:48.583 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.583 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.583 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ D == \- ]] 00:15:48.583 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'D'\''X@]@]P+m /dev/null' 00:15:50.937 07:21:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.852 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:52.852 00:15:52.852 real 0m13.230s 00:15:52.852 user 0m19.147s 00:15:52.852 sys 0m6.249s 00:15:52.852 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.852 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:52.852 ************************************ 00:15:52.852 END TEST nvmf_invalid 00:15:52.852 ************************************ 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:53.114 ************************************ 00:15:53.114 START TEST nvmf_connect_stress 00:15:53.114 ************************************ 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:53.114 * Looking for test storage... 
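Each negative check in the invalid.sh trace above follows the same pattern: call rpc.py nvmf_create_subsystem with one deliberately bad argument (an unknown target name, or a serial/model number ending in the 0x1f control character), capture the JSON-RPC error text, and assert that the expected message ("Unable to find target", "Invalid SN", "Invalid MN") appears in it. A rough sketch of that pattern, assuming the error text is captured straight from the rpc.py output as shown in the trace (not the literal invalid.sh):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # unknown target name -> "Unable to find target foobar"
  out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17092 2>&1) || true
  [[ $out == *"Unable to find target"* ]]

  # serial number ending in a control character (0x1f) -> "Invalid SN"
  out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18605 2>&1) || true
  [[ $out == *"Invalid SN"* ]]

  # model number ending in a control character -> "Invalid MN"
  out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16154 2>&1) || true
  [[ $out == *"Invalid MN"* ]]

The long printf/echo run earlier in the trace is gen_random_s assembling a 21-character string one character at a time from ASCII codes 32-127, to be used as another deliberately invalid value. How each code is chosen is not visible in this excerpt, so $RANDOM is assumed below purely for illustration:

  gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))                    # same code range as the chars array in the trace
      for ((ll = 0; ll < length; ll++)); do
          # pick a code, render it as hex, then append the corresponding character
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }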
00:15:53.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.114 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.115 07:22:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.262 07:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:01.262 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:01.262 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
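The gather_supported_nvmf_pci_devs pass running here resolves the test NICs from sysfs rather than from lspci output: it builds lists of supported Intel (e810, x722) and Mellanox device IDs, keeps the e810 functions (the pci_devs=("${e810[@]}") branch above), and then, for each PCI function, records any interface that shows up under its net/ directory and is up. A condensed sketch of that lookup for the two functions found on this machine; the real helper also branches on RDMA vs. TCP and errors out when no ports are found:

  net_devs=()
  for pci in 0000:4b:00.0 0000:4b:00.1; do              # the E810 (0x8086 - 0x159b) functions found above
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
      net_devs+=("${pci_net_devs[@]}")                   # -> cvl_0_0, then cvl_0_1
  done

With two ports available, the harness assigns one as NVMF_TARGET_INTERFACE (cvl_0_0) and the other as NVMF_INITIATOR_INTERFACE (cvl_0_1) before repeating the same namespace setup used by the previous test.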
00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:01.262 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:01.262 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.262 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:01.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:16:01.263 00:16:01.263 --- 10.0.0.2 ping statistics --- 00:16:01.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.263 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:16:01.263 00:16:01.263 --- 10.0.0.1 ping statistics --- 00:16:01.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.263 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=47776 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 47776 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 47776 ']' 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.263 07:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.263 [2024-07-25 07:22:07.790851] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
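nvmfappstart then launches the target from inside the namespace and waits for its RPC socket: the trace shows "ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE" being started with nvmfpid=47776, followed by waitforlisten blocking on /var/tmp/spdk.sock with max_retries=100. The waitforlisten internals are not shown in this excerpt; a minimal sketch of the idea, with the retry interval and the probe RPC assumed:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # poll (up to the trace's max_retries=100) until the target answers on its RPC socket
  for ((i = 0; i < 100; i++)); do
      "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done

Because the RPC endpoint is a UNIX-domain socket bound to a filesystem path, rpc.py can reach the in-namespace target from the root namespace without any netns prefix, which is why the later rpc_cmd calls in this test need no special handling.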
00:16:01.263 [2024-07-25 07:22:07.790919] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.263 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.263 [2024-07-25 07:22:07.879003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:01.263 [2024-07-25 07:22:07.972999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.263 [2024-07-25 07:22:07.973059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.263 [2024-07-25 07:22:07.973067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.263 [2024-07-25 07:22:07.973074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.263 [2024-07-25 07:22:07.973080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.263 [2024-07-25 07:22:07.973252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.263 [2024-07-25 07:22:07.973471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.263 [2024-07-25 07:22:07.973471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.263 [2024-07-25 07:22:08.606362] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.263 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.525 [2024-07-25 07:22:08.639186] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.525 NULL1 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=48065 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.525 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.525 07:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.786 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.786 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:01.786 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:01.786 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.786 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.047 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.047 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:02.047 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.047 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.047 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.619 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.619 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:02.619 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.619 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.619 07:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.880 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.880 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:02.880 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.880 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.880 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.142 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.142 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:03.142 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:03.142 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.142 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.403 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.403 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:03.403 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:03.403 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.403 07:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.976 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.976 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:03.976 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:03.976 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.976 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.238 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.238 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:04.238 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:04.238 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.238 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.499 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.499 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:04.499 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:04.499 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.499 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:04.761 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.761 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:04.761 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:04.761 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.761 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:05.022 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.022 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:05.022 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:05.022 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.022 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:05.594 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.594 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:05.594 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:05.594 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.594 07:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:05.855 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.855 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:05.855 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:05.855 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.855 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.117 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.117 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:06.117 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:06.117 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.117 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.417 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.417 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:06.417 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:06.417 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.417 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.678 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.678 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:06.678 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:06.678 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.678 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:06.938 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.939 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:06.939 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:06.939 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.939 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:07.508 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.509 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:07.509 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:07.509 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.509 07:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:07.769 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.769 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:07.769 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:07.769 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.769 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:08.029 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.029 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:08.029 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.029 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.029 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:08.290 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.290 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:08.290 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.290 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.290 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:08.551 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.551 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:08.551 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:08.551 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.551 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:09.121 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.121 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:09.121 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.121 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.121 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:09.382 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.382 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:09.382 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.382 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.382 07:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:09.642 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.642 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:09.642 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.642 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.642 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:09.903 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.903 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:09.903 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:09.903 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.903 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.475 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.475 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:10.475 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.475 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.475 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.735 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.735 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:10.735 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.735 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.735 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.995 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.995 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:10.995 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.995 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.995 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.258 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.258 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:11.258 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.258 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.258 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.518 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 48065 00:16:11.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (48065) - No such process 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 48065 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.518 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:11.518 rmmod nvme_tcp 00:16:11.518 rmmod nvme_fabrics 00:16:11.779 rmmod nvme_keyring 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 47776 ']' 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 47776 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 47776 ']' 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 47776 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 47776 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 47776' 00:16:11.779 killing process with pid 47776 00:16:11.779 07:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 47776 00:16:11.779 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 47776 00:16:11.779 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.779 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.779 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.779 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.779 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.779 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.779 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.779 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:14.326 00:16:14.326 real 0m20.882s 00:16:14.326 user 0m42.028s 00:16:14.326 sys 0m8.676s 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.326 ************************************ 00:16:14.326 END TEST nvmf_connect_stress 00:16:14.326 ************************************ 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:14.326 ************************************ 00:16:14.326 START TEST nvmf_fused_ordering 00:16:14.326 ************************************ 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:14.326 * Looking for test storage... 
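The tail of the connect_stress run above follows a simple pattern: the harness keeps checking with kill -0 that the background stress process (PID 48065 in this run) is still alive while it replays RPCs at the target, and once the process disappears ("No such process") it removes its rpc.txt work file, clears the traps, and calls nvmftestfini, which unloads nvme-tcp/nvme-fabrics/nvme-keyring and kills the nvmf_tgt it started. A minimal bash sketch of that monitor/teardown shape follows; the PID and rpc.txt path are taken from this run purely as placeholders, and the stdin redirection into rpc_cmd is inferred rather than visible in the xtrace.

# Sketch only; mirrors connect_stress.sh lines 34-43 as traced above.
stress_pid=48065                                     # per-run value from the log
rpc_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt

while kill -0 "$stress_pid" 2>/dev/null; do          # connect_stress.sh@34: stressor still running?
    rpc_cmd < "$rpc_file"                            # connect_stress.sh@35: replay RPCs against the target
done
wait "$stress_pid" 2>/dev/null || true               # connect_stress.sh@38: reap it; failure is expected here
rm -f "$rpc_file"                                    # connect_stress.sh@39
trap - SIGINT SIGTERM EXIT                           # connect_stress.sh@41
nvmftestfini                                         # connect_stress.sh@43: rmmod nvme-tcp/fabrics/keyring, kill nvmf_tgt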
00:16:14.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.326 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:14.327 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:20.927 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.927 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:20.927 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:20.927 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:20.927 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:20.927 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:20.927 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:20.928 07:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:20.928 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:20.928 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
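The harness is running with the e810 device list (pci_devs is reset to the e810 array above), so only PCI functions whose vendor:device ID is 0x8086:0x1592 or 0x8086:0x159b are kept, and both 0000:4b:00.0 and 0000:4b:00.1 match. An equivalent manual check is sketched below with lspci and a /sys lookup; this is not the pci_bus_cache mechanism common.sh itself uses, just a way to reproduce the same result by hand.

# List the E810 functions by the same device IDs the harness matches on.
for dev_id in 1592 159b; do
    lspci -D -d "8086:${dev_id}"                 # -D prints the full address, e.g. 0000:4b:00.0
done

# Map a matched PCI address to its net device, as the trace does via /sys/bus/pci/devices/$pci/net.
ls /sys/bus/pci/devices/0000:4b:00.0/net         # prints cvl_0_0 on this host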
00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:20.928 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:20.928 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.928 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:21.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:16:21.190 00:16:21.190 --- 10.0.0.2 ping statistics --- 00:16:21.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.190 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:21.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:21.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:16:21.190 00:16:21.190 --- 10.0.0.1 ping statistics --- 00:16:21.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.190 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=54151 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 54151 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 54151 ']' 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.190 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:21.190 [2024-07-25 07:22:28.443211] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
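Taken together, the nvmf_tcp_init and nvmfappstart steps traced above build a point-to-point NVMe/TCP topology on one host: port cvl_0_0 is moved into a private network namespace and addressed as the target at 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, both directions are ping-tested, and nvmf_tgt is then launched inside the namespace on core mask 0x2. Condensed into a runnable sketch below; interface names, addresses, and the (shortened, tree-relative) nvmf_tgt path are this run's values.

# Condensed from the nvmf_tcp_init trace; all values are specific to this run.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                              # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # accept TCP to port 4420, exactly as in the trace
ping -c 1 10.0.0.2                                           # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                       # target namespace -> root namespace

# Start the target inside the namespace on core 1 (mask 0x2), as nvmfappstart does here.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &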
00:16:21.190 [2024-07-25 07:22:28.443261] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.190 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.190 [2024-07-25 07:22:28.526264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.451 [2024-07-25 07:22:28.589453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.451 [2024-07-25 07:22:28.589489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.451 [2024-07-25 07:22:28.589500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.451 [2024-07-25 07:22:28.589507] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.451 [2024-07-25 07:22:28.589512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.451 [2024-07-25 07:22:28.589537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.024 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.024 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:16:22.024 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:22.024 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:22.024 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:22.024 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.024 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:22.024 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:22.025 [2024-07-25 07:22:29.251759] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.025 [2024-07-25 07:22:29.268032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:22.025 NULL1 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.025 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:22.025 [2024-07-25 07:22:29.326127] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
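The RPC sequence traced just above is the entire target-side setup the fused_ordering app runs against: a TCP transport, one subsystem (nqn.2016-06.io.spdk:cnode1) that allows any host and carries a 1000 MB null bdev as namespace 1, and a listener on 10.0.0.2:4420. It is written out below as direct rpc.py calls rather than the harness's rpc_cmd wrapper; rpc_cmd is assumed here to forward to scripts/rpc.py with the right RPC socket, flag values are the ones from the trace, and paths are shortened to be relative to the spdk tree.

# Target provisioning, flags copied verbatim from the xtrace above.
RPC="scripts/rpc.py"                               # stand-in for the harness's rpc_cmd wrapper
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10   # -a: allow any host, -m: max 10 namespaces
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512               # 1000 MB null bdev, 512-byte blocks ("size: 1GB" below)
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns "$NQN" NULL1            # becomes namespace ID 1

# The initiator-side app then connects to the same listener:
test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'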
00:16:22.025 [2024-07-25 07:22:29.326196] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54352 ] 00:16:22.025 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.598 Attached to nqn.2016-06.io.spdk:cnode1 00:16:22.598 Namespace ID: 1 size: 1GB 00:16:22.598 fused_ordering(0) 00:16:22.598 fused_ordering(1) 00:16:22.598 fused_ordering(2) 00:16:22.598 fused_ordering(3) 00:16:22.598 fused_ordering(4) 00:16:22.598 fused_ordering(5) 00:16:22.598 fused_ordering(6) 00:16:22.598 fused_ordering(7) 00:16:22.598 fused_ordering(8) 00:16:22.598 fused_ordering(9) 00:16:22.598 fused_ordering(10) 00:16:22.598 fused_ordering(11) 00:16:22.598 fused_ordering(12) 00:16:22.598 fused_ordering(13) 00:16:22.598 fused_ordering(14) 00:16:22.598 fused_ordering(15) 00:16:22.598 fused_ordering(16) 00:16:22.598 fused_ordering(17) 00:16:22.598 fused_ordering(18) 00:16:22.598 fused_ordering(19) 00:16:22.598 fused_ordering(20) 00:16:22.598 fused_ordering(21) 00:16:22.598 fused_ordering(22) 00:16:22.598 fused_ordering(23) 00:16:22.598 fused_ordering(24) 00:16:22.598 fused_ordering(25) 00:16:22.598 fused_ordering(26) 00:16:22.598 fused_ordering(27) 00:16:22.598 fused_ordering(28) 00:16:22.598 fused_ordering(29) 00:16:22.598 fused_ordering(30) 00:16:22.598 fused_ordering(31) 00:16:22.598 fused_ordering(32) 00:16:22.598 fused_ordering(33) 00:16:22.598 fused_ordering(34) 00:16:22.598 fused_ordering(35) 00:16:22.598 fused_ordering(36) 00:16:22.598 fused_ordering(37) 00:16:22.598 fused_ordering(38) 00:16:22.598 fused_ordering(39) 00:16:22.598 fused_ordering(40) 00:16:22.598 fused_ordering(41) 00:16:22.598 fused_ordering(42) 00:16:22.598 fused_ordering(43) 00:16:22.598 fused_ordering(44) 00:16:22.598 fused_ordering(45) 00:16:22.598 fused_ordering(46) 00:16:22.598 fused_ordering(47) 00:16:22.598 fused_ordering(48) 00:16:22.598 fused_ordering(49) 00:16:22.598 fused_ordering(50) 00:16:22.598 fused_ordering(51) 00:16:22.598 fused_ordering(52) 00:16:22.598 fused_ordering(53) 00:16:22.598 fused_ordering(54) 00:16:22.598 fused_ordering(55) 00:16:22.598 fused_ordering(56) 00:16:22.598 fused_ordering(57) 00:16:22.598 fused_ordering(58) 00:16:22.598 fused_ordering(59) 00:16:22.598 fused_ordering(60) 00:16:22.598 fused_ordering(61) 00:16:22.598 fused_ordering(62) 00:16:22.598 fused_ordering(63) 00:16:22.598 fused_ordering(64) 00:16:22.598 fused_ordering(65) 00:16:22.598 fused_ordering(66) 00:16:22.598 fused_ordering(67) 00:16:22.598 fused_ordering(68) 00:16:22.598 fused_ordering(69) 00:16:22.598 fused_ordering(70) 00:16:22.598 fused_ordering(71) 00:16:22.598 fused_ordering(72) 00:16:22.598 fused_ordering(73) 00:16:22.598 fused_ordering(74) 00:16:22.598 fused_ordering(75) 00:16:22.598 fused_ordering(76) 00:16:22.598 fused_ordering(77) 00:16:22.598 fused_ordering(78) 00:16:22.598 fused_ordering(79) 00:16:22.598 fused_ordering(80) 00:16:22.598 fused_ordering(81) 00:16:22.598 fused_ordering(82) 00:16:22.598 fused_ordering(83) 00:16:22.598 fused_ordering(84) 00:16:22.598 fused_ordering(85) 00:16:22.598 fused_ordering(86) 00:16:22.598 fused_ordering(87) 00:16:22.598 fused_ordering(88) 00:16:22.598 fused_ordering(89) 00:16:22.598 fused_ordering(90) 00:16:22.598 fused_ordering(91) 00:16:22.598 fused_ordering(92) 00:16:22.598 fused_ordering(93) 00:16:22.598 fused_ordering(94) 00:16:22.598 fused_ordering(95) 00:16:22.598 fused_ordering(96) 
00:16:22.598 fused_ordering(97) 00:16:22.598 fused_ordering(98) 00:16:22.598 fused_ordering(99) 00:16:22.598 fused_ordering(100) 00:16:22.598 fused_ordering(101) 00:16:22.598 fused_ordering(102) 00:16:22.598 fused_ordering(103) 00:16:22.598 fused_ordering(104) 00:16:22.598 fused_ordering(105) 00:16:22.598 fused_ordering(106) 00:16:22.598 fused_ordering(107) 00:16:22.598 fused_ordering(108) 00:16:22.598 fused_ordering(109) 00:16:22.598 fused_ordering(110) 00:16:22.598 fused_ordering(111) 00:16:22.598 fused_ordering(112) 00:16:22.598 fused_ordering(113) 00:16:22.598 fused_ordering(114) 00:16:22.598 fused_ordering(115) 00:16:22.598 fused_ordering(116) 00:16:22.598 fused_ordering(117) 00:16:22.598 fused_ordering(118) 00:16:22.598 fused_ordering(119) 00:16:22.598 fused_ordering(120) 00:16:22.598 fused_ordering(121) 00:16:22.598 fused_ordering(122) 00:16:22.598 fused_ordering(123) 00:16:22.598 fused_ordering(124) 00:16:22.598 fused_ordering(125) 00:16:22.598 fused_ordering(126) 00:16:22.598 fused_ordering(127) 00:16:22.598 fused_ordering(128) 00:16:22.598 fused_ordering(129) 00:16:22.598 fused_ordering(130) 00:16:22.598 fused_ordering(131) 00:16:22.598 fused_ordering(132) 00:16:22.598 fused_ordering(133) 00:16:22.598 fused_ordering(134) 00:16:22.598 fused_ordering(135) 00:16:22.598 fused_ordering(136) 00:16:22.598 fused_ordering(137) 00:16:22.598 fused_ordering(138) 00:16:22.598 fused_ordering(139) 00:16:22.598 fused_ordering(140) 00:16:22.598 fused_ordering(141) 00:16:22.598 fused_ordering(142) 00:16:22.598 fused_ordering(143) 00:16:22.598 fused_ordering(144) 00:16:22.598 fused_ordering(145) 00:16:22.598 fused_ordering(146) 00:16:22.598 fused_ordering(147) 00:16:22.598 fused_ordering(148) 00:16:22.598 fused_ordering(149) 00:16:22.598 fused_ordering(150) 00:16:22.598 fused_ordering(151) 00:16:22.598 fused_ordering(152) 00:16:22.598 fused_ordering(153) 00:16:22.598 fused_ordering(154) 00:16:22.598 fused_ordering(155) 00:16:22.598 fused_ordering(156) 00:16:22.598 fused_ordering(157) 00:16:22.598 fused_ordering(158) 00:16:22.598 fused_ordering(159) 00:16:22.598 fused_ordering(160) 00:16:22.598 fused_ordering(161) 00:16:22.598 fused_ordering(162) 00:16:22.598 fused_ordering(163) 00:16:22.598 fused_ordering(164) 00:16:22.598 fused_ordering(165) 00:16:22.598 fused_ordering(166) 00:16:22.598 fused_ordering(167) 00:16:22.598 fused_ordering(168) 00:16:22.598 fused_ordering(169) 00:16:22.598 fused_ordering(170) 00:16:22.598 fused_ordering(171) 00:16:22.599 fused_ordering(172) 00:16:22.599 fused_ordering(173) 00:16:22.599 fused_ordering(174) 00:16:22.599 fused_ordering(175) 00:16:22.599 fused_ordering(176) 00:16:22.599 fused_ordering(177) 00:16:22.599 fused_ordering(178) 00:16:22.599 fused_ordering(179) 00:16:22.599 fused_ordering(180) 00:16:22.599 fused_ordering(181) 00:16:22.599 fused_ordering(182) 00:16:22.599 fused_ordering(183) 00:16:22.599 fused_ordering(184) 00:16:22.599 fused_ordering(185) 00:16:22.599 fused_ordering(186) 00:16:22.599 fused_ordering(187) 00:16:22.599 fused_ordering(188) 00:16:22.599 fused_ordering(189) 00:16:22.599 fused_ordering(190) 00:16:22.599 fused_ordering(191) 00:16:22.599 fused_ordering(192) 00:16:22.599 fused_ordering(193) 00:16:22.599 fused_ordering(194) 00:16:22.599 fused_ordering(195) 00:16:22.599 fused_ordering(196) 00:16:22.599 fused_ordering(197) 00:16:22.599 fused_ordering(198) 00:16:22.599 fused_ordering(199) 00:16:22.599 fused_ordering(200) 00:16:22.599 fused_ordering(201) 00:16:22.599 fused_ordering(202) 00:16:22.599 fused_ordering(203) 00:16:22.599 
fused_ordering(204) 00:16:22.599 fused_ordering(205) 00:16:23.172 fused_ordering(206) 00:16:23.172 fused_ordering(207) 00:16:23.172 fused_ordering(208) 00:16:23.172 fused_ordering(209) 00:16:23.172 fused_ordering(210) 00:16:23.172 fused_ordering(211) 00:16:23.172 fused_ordering(212) 00:16:23.172 fused_ordering(213) 00:16:23.172 fused_ordering(214) 00:16:23.172 fused_ordering(215) 00:16:23.172 fused_ordering(216) 00:16:23.172 fused_ordering(217) 00:16:23.172 fused_ordering(218) 00:16:23.172 fused_ordering(219) 00:16:23.172 fused_ordering(220) 00:16:23.172 fused_ordering(221) 00:16:23.172 fused_ordering(222) 00:16:23.172 fused_ordering(223) 00:16:23.172 fused_ordering(224) 00:16:23.172 fused_ordering(225) 00:16:23.172 fused_ordering(226) 00:16:23.172 fused_ordering(227) 00:16:23.172 fused_ordering(228) 00:16:23.172 fused_ordering(229) 00:16:23.172 fused_ordering(230) 00:16:23.172 fused_ordering(231) 00:16:23.172 fused_ordering(232) 00:16:23.172 fused_ordering(233) 00:16:23.172 fused_ordering(234) 00:16:23.172 fused_ordering(235) 00:16:23.172 fused_ordering(236) 00:16:23.172 fused_ordering(237) 00:16:23.172 fused_ordering(238) 00:16:23.172 fused_ordering(239) 00:16:23.172 fused_ordering(240) 00:16:23.172 fused_ordering(241) 00:16:23.172 fused_ordering(242) 00:16:23.172 fused_ordering(243) 00:16:23.172 fused_ordering(244) 00:16:23.172 fused_ordering(245) 00:16:23.172 fused_ordering(246) 00:16:23.172 fused_ordering(247) 00:16:23.172 fused_ordering(248) 00:16:23.172 fused_ordering(249) 00:16:23.172 fused_ordering(250) 00:16:23.172 fused_ordering(251) 00:16:23.172 fused_ordering(252) 00:16:23.172 fused_ordering(253) 00:16:23.172 fused_ordering(254) 00:16:23.172 fused_ordering(255) 00:16:23.172 fused_ordering(256) 00:16:23.172 fused_ordering(257) 00:16:23.172 fused_ordering(258) 00:16:23.172 fused_ordering(259) 00:16:23.172 fused_ordering(260) 00:16:23.172 fused_ordering(261) 00:16:23.172 fused_ordering(262) 00:16:23.172 fused_ordering(263) 00:16:23.172 fused_ordering(264) 00:16:23.172 fused_ordering(265) 00:16:23.172 fused_ordering(266) 00:16:23.172 fused_ordering(267) 00:16:23.172 fused_ordering(268) 00:16:23.172 fused_ordering(269) 00:16:23.172 fused_ordering(270) 00:16:23.172 fused_ordering(271) 00:16:23.172 fused_ordering(272) 00:16:23.172 fused_ordering(273) 00:16:23.172 fused_ordering(274) 00:16:23.172 fused_ordering(275) 00:16:23.172 fused_ordering(276) 00:16:23.172 fused_ordering(277) 00:16:23.172 fused_ordering(278) 00:16:23.172 fused_ordering(279) 00:16:23.172 fused_ordering(280) 00:16:23.172 fused_ordering(281) 00:16:23.172 fused_ordering(282) 00:16:23.172 fused_ordering(283) 00:16:23.172 fused_ordering(284) 00:16:23.172 fused_ordering(285) 00:16:23.172 fused_ordering(286) 00:16:23.172 fused_ordering(287) 00:16:23.172 fused_ordering(288) 00:16:23.172 fused_ordering(289) 00:16:23.172 fused_ordering(290) 00:16:23.172 fused_ordering(291) 00:16:23.172 fused_ordering(292) 00:16:23.172 fused_ordering(293) 00:16:23.172 fused_ordering(294) 00:16:23.172 fused_ordering(295) 00:16:23.172 fused_ordering(296) 00:16:23.172 fused_ordering(297) 00:16:23.172 fused_ordering(298) 00:16:23.172 fused_ordering(299) 00:16:23.172 fused_ordering(300) 00:16:23.172 fused_ordering(301) 00:16:23.172 fused_ordering(302) 00:16:23.172 fused_ordering(303) 00:16:23.172 fused_ordering(304) 00:16:23.172 fused_ordering(305) 00:16:23.172 fused_ordering(306) 00:16:23.172 fused_ordering(307) 00:16:23.172 fused_ordering(308) 00:16:23.172 fused_ordering(309) 00:16:23.172 fused_ordering(310) 00:16:23.172 fused_ordering(311) 
00:16:23.172 fused_ordering(312) 00:16:23.172 fused_ordering(313) 00:16:23.172 fused_ordering(314) 00:16:23.172 fused_ordering(315) 00:16:23.172 fused_ordering(316) 00:16:23.172 fused_ordering(317) 00:16:23.172 fused_ordering(318) 00:16:23.172 fused_ordering(319) 00:16:23.172 fused_ordering(320) 00:16:23.172 fused_ordering(321) 00:16:23.172 fused_ordering(322) 00:16:23.172 fused_ordering(323) 00:16:23.172 fused_ordering(324) 00:16:23.172 fused_ordering(325) 00:16:23.172 fused_ordering(326) 00:16:23.172 fused_ordering(327) 00:16:23.172 fused_ordering(328) 00:16:23.172 fused_ordering(329) 00:16:23.172 fused_ordering(330) 00:16:23.172 fused_ordering(331) 00:16:23.172 fused_ordering(332) 00:16:23.172 fused_ordering(333) 00:16:23.172 fused_ordering(334) 00:16:23.172 fused_ordering(335) 00:16:23.172 fused_ordering(336) 00:16:23.172 fused_ordering(337) 00:16:23.172 fused_ordering(338) 00:16:23.172 fused_ordering(339) 00:16:23.172 fused_ordering(340) 00:16:23.172 fused_ordering(341) 00:16:23.172 fused_ordering(342) 00:16:23.172 fused_ordering(343) 00:16:23.172 fused_ordering(344) 00:16:23.172 fused_ordering(345) 00:16:23.172 fused_ordering(346) 00:16:23.172 fused_ordering(347) 00:16:23.172 fused_ordering(348) 00:16:23.172 fused_ordering(349) 00:16:23.172 fused_ordering(350) 00:16:23.172 fused_ordering(351) 00:16:23.172 fused_ordering(352) 00:16:23.172 fused_ordering(353) 00:16:23.172 fused_ordering(354) 00:16:23.172 fused_ordering(355) 00:16:23.172 fused_ordering(356) 00:16:23.172 fused_ordering(357) 00:16:23.172 fused_ordering(358) 00:16:23.172 fused_ordering(359) 00:16:23.172 fused_ordering(360) 00:16:23.172 fused_ordering(361) 00:16:23.172 fused_ordering(362) 00:16:23.172 fused_ordering(363) 00:16:23.172 fused_ordering(364) 00:16:23.172 fused_ordering(365) 00:16:23.172 fused_ordering(366) 00:16:23.172 fused_ordering(367) 00:16:23.172 fused_ordering(368) 00:16:23.172 fused_ordering(369) 00:16:23.172 fused_ordering(370) 00:16:23.172 fused_ordering(371) 00:16:23.172 fused_ordering(372) 00:16:23.172 fused_ordering(373) 00:16:23.172 fused_ordering(374) 00:16:23.172 fused_ordering(375) 00:16:23.172 fused_ordering(376) 00:16:23.172 fused_ordering(377) 00:16:23.172 fused_ordering(378) 00:16:23.172 fused_ordering(379) 00:16:23.172 fused_ordering(380) 00:16:23.172 fused_ordering(381) 00:16:23.172 fused_ordering(382) 00:16:23.172 fused_ordering(383) 00:16:23.172 fused_ordering(384) 00:16:23.172 fused_ordering(385) 00:16:23.172 fused_ordering(386) 00:16:23.172 fused_ordering(387) 00:16:23.172 fused_ordering(388) 00:16:23.172 fused_ordering(389) 00:16:23.172 fused_ordering(390) 00:16:23.172 fused_ordering(391) 00:16:23.172 fused_ordering(392) 00:16:23.172 fused_ordering(393) 00:16:23.172 fused_ordering(394) 00:16:23.172 fused_ordering(395) 00:16:23.172 fused_ordering(396) 00:16:23.172 fused_ordering(397) 00:16:23.172 fused_ordering(398) 00:16:23.172 fused_ordering(399) 00:16:23.172 fused_ordering(400) 00:16:23.172 fused_ordering(401) 00:16:23.172 fused_ordering(402) 00:16:23.172 fused_ordering(403) 00:16:23.172 fused_ordering(404) 00:16:23.172 fused_ordering(405) 00:16:23.172 fused_ordering(406) 00:16:23.172 fused_ordering(407) 00:16:23.172 fused_ordering(408) 00:16:23.172 fused_ordering(409) 00:16:23.172 fused_ordering(410) 00:16:23.745 fused_ordering(411) 00:16:23.745 fused_ordering(412) 00:16:23.745 fused_ordering(413) 00:16:23.745 fused_ordering(414) 00:16:23.745 fused_ordering(415) 00:16:23.745 fused_ordering(416) 00:16:23.745 fused_ordering(417) 00:16:23.745 fused_ordering(418) 00:16:23.745 
fused_ordering(419) 00:16:23.745 fused_ordering(420) 00:16:23.745 fused_ordering(421) 00:16:23.745 fused_ordering(422) 00:16:23.745 fused_ordering(423) 00:16:23.745 fused_ordering(424) 00:16:23.745 fused_ordering(425) 00:16:23.745 fused_ordering(426) 00:16:23.745 fused_ordering(427) 00:16:23.745 fused_ordering(428) 00:16:23.745 fused_ordering(429) 00:16:23.745 fused_ordering(430) 00:16:23.745 fused_ordering(431) 00:16:23.745 fused_ordering(432) 00:16:23.745 fused_ordering(433) 00:16:23.745 fused_ordering(434) 00:16:23.745 fused_ordering(435) 00:16:23.745 fused_ordering(436) 00:16:23.745 fused_ordering(437) 00:16:23.745 fused_ordering(438) 00:16:23.745 fused_ordering(439) 00:16:23.745 fused_ordering(440) 00:16:23.745 fused_ordering(441) 00:16:23.745 fused_ordering(442) 00:16:23.745 fused_ordering(443) 00:16:23.745 fused_ordering(444) 00:16:23.745 fused_ordering(445) 00:16:23.745 fused_ordering(446) 00:16:23.745 fused_ordering(447) 00:16:23.745 fused_ordering(448) 00:16:23.745 fused_ordering(449) 00:16:23.745 fused_ordering(450) 00:16:23.745 fused_ordering(451) 00:16:23.745 fused_ordering(452) 00:16:23.745 fused_ordering(453) 00:16:23.745 fused_ordering(454) 00:16:23.745 fused_ordering(455) 00:16:23.745 fused_ordering(456) 00:16:23.745 fused_ordering(457) 00:16:23.745 fused_ordering(458) 00:16:23.745 fused_ordering(459) 00:16:23.745 fused_ordering(460) 00:16:23.745 fused_ordering(461) 00:16:23.745 fused_ordering(462) 00:16:23.745 fused_ordering(463) 00:16:23.745 fused_ordering(464) 00:16:23.745 fused_ordering(465) 00:16:23.745 fused_ordering(466) 00:16:23.745 fused_ordering(467) 00:16:23.745 fused_ordering(468) 00:16:23.745 fused_ordering(469) 00:16:23.745 fused_ordering(470) 00:16:23.746 fused_ordering(471) 00:16:23.746 fused_ordering(472) 00:16:23.746 fused_ordering(473) 00:16:23.746 fused_ordering(474) 00:16:23.746 fused_ordering(475) 00:16:23.746 fused_ordering(476) 00:16:23.746 fused_ordering(477) 00:16:23.746 fused_ordering(478) 00:16:23.746 fused_ordering(479) 00:16:23.746 fused_ordering(480) 00:16:23.746 fused_ordering(481) 00:16:23.746 fused_ordering(482) 00:16:23.746 fused_ordering(483) 00:16:23.746 fused_ordering(484) 00:16:23.746 fused_ordering(485) 00:16:23.746 fused_ordering(486) 00:16:23.746 fused_ordering(487) 00:16:23.746 fused_ordering(488) 00:16:23.746 fused_ordering(489) 00:16:23.746 fused_ordering(490) 00:16:23.746 fused_ordering(491) 00:16:23.746 fused_ordering(492) 00:16:23.746 fused_ordering(493) 00:16:23.746 fused_ordering(494) 00:16:23.746 fused_ordering(495) 00:16:23.746 fused_ordering(496) 00:16:23.746 fused_ordering(497) 00:16:23.746 fused_ordering(498) 00:16:23.746 fused_ordering(499) 00:16:23.746 fused_ordering(500) 00:16:23.746 fused_ordering(501) 00:16:23.746 fused_ordering(502) 00:16:23.746 fused_ordering(503) 00:16:23.746 fused_ordering(504) 00:16:23.746 fused_ordering(505) 00:16:23.746 fused_ordering(506) 00:16:23.746 fused_ordering(507) 00:16:23.746 fused_ordering(508) 00:16:23.746 fused_ordering(509) 00:16:23.746 fused_ordering(510) 00:16:23.746 fused_ordering(511) 00:16:23.746 fused_ordering(512) 00:16:23.746 fused_ordering(513) 00:16:23.746 fused_ordering(514) 00:16:23.746 fused_ordering(515) 00:16:23.746 fused_ordering(516) 00:16:23.746 fused_ordering(517) 00:16:23.746 fused_ordering(518) 00:16:23.746 fused_ordering(519) 00:16:23.746 fused_ordering(520) 00:16:23.746 fused_ordering(521) 00:16:23.746 fused_ordering(522) 00:16:23.746 fused_ordering(523) 00:16:23.746 fused_ordering(524) 00:16:23.746 fused_ordering(525) 00:16:23.746 fused_ordering(526) 
00:16:23.746 fused_ordering(527) 00:16:23.746 fused_ordering(528) 00:16:23.746 fused_ordering(529) 00:16:23.746 fused_ordering(530) 00:16:23.746 fused_ordering(531) 00:16:23.746 fused_ordering(532) 00:16:23.746 fused_ordering(533) 00:16:23.746 fused_ordering(534) 00:16:23.746 fused_ordering(535) 00:16:23.746 fused_ordering(536) 00:16:23.746 fused_ordering(537) 00:16:23.746 fused_ordering(538) 00:16:23.746 fused_ordering(539) 00:16:23.746 fused_ordering(540) 00:16:23.746 fused_ordering(541) 00:16:23.746 fused_ordering(542) 00:16:23.746 fused_ordering(543) 00:16:23.746 fused_ordering(544) 00:16:23.746 fused_ordering(545) 00:16:23.746 fused_ordering(546) 00:16:23.746 fused_ordering(547) 00:16:23.746 fused_ordering(548) 00:16:23.746 fused_ordering(549) 00:16:23.746 fused_ordering(550) 00:16:23.746 fused_ordering(551) 00:16:23.746 fused_ordering(552) 00:16:23.746 fused_ordering(553) 00:16:23.746 fused_ordering(554) 00:16:23.746 fused_ordering(555) 00:16:23.746 fused_ordering(556) 00:16:23.746 fused_ordering(557) 00:16:23.746 fused_ordering(558) 00:16:23.746 fused_ordering(559) 00:16:23.746 fused_ordering(560) 00:16:23.746 fused_ordering(561) 00:16:23.746 fused_ordering(562) 00:16:23.746 fused_ordering(563) 00:16:23.746 fused_ordering(564) 00:16:23.746 fused_ordering(565) 00:16:23.746 fused_ordering(566) 00:16:23.746 fused_ordering(567) 00:16:23.746 fused_ordering(568) 00:16:23.746 fused_ordering(569) 00:16:23.746 fused_ordering(570) 00:16:23.746 fused_ordering(571) 00:16:23.746 fused_ordering(572) 00:16:23.746 fused_ordering(573) 00:16:23.746 fused_ordering(574) 00:16:23.746 fused_ordering(575) 00:16:23.746 fused_ordering(576) 00:16:23.746 fused_ordering(577) 00:16:23.746 fused_ordering(578) 00:16:23.746 fused_ordering(579) 00:16:23.746 fused_ordering(580) 00:16:23.746 fused_ordering(581) 00:16:23.746 fused_ordering(582) 00:16:23.746 fused_ordering(583) 00:16:23.746 fused_ordering(584) 00:16:23.746 fused_ordering(585) 00:16:23.746 fused_ordering(586) 00:16:23.746 fused_ordering(587) 00:16:23.746 fused_ordering(588) 00:16:23.746 fused_ordering(589) 00:16:23.746 fused_ordering(590) 00:16:23.746 fused_ordering(591) 00:16:23.746 fused_ordering(592) 00:16:23.746 fused_ordering(593) 00:16:23.746 fused_ordering(594) 00:16:23.746 fused_ordering(595) 00:16:23.746 fused_ordering(596) 00:16:23.746 fused_ordering(597) 00:16:23.746 fused_ordering(598) 00:16:23.746 fused_ordering(599) 00:16:23.746 fused_ordering(600) 00:16:23.746 fused_ordering(601) 00:16:23.746 fused_ordering(602) 00:16:23.746 fused_ordering(603) 00:16:23.746 fused_ordering(604) 00:16:23.746 fused_ordering(605) 00:16:23.746 fused_ordering(606) 00:16:23.746 fused_ordering(607) 00:16:23.746 fused_ordering(608) 00:16:23.746 fused_ordering(609) 00:16:23.746 fused_ordering(610) 00:16:23.746 fused_ordering(611) 00:16:23.746 fused_ordering(612) 00:16:23.746 fused_ordering(613) 00:16:23.746 fused_ordering(614) 00:16:23.746 fused_ordering(615) 00:16:24.690 fused_ordering(616) 00:16:24.690 fused_ordering(617) 00:16:24.690 fused_ordering(618) 00:16:24.690 fused_ordering(619) 00:16:24.690 fused_ordering(620) 00:16:24.690 fused_ordering(621) 00:16:24.690 fused_ordering(622) 00:16:24.690 fused_ordering(623) 00:16:24.690 fused_ordering(624) 00:16:24.690 fused_ordering(625) 00:16:24.690 fused_ordering(626) 00:16:24.690 fused_ordering(627) 00:16:24.690 fused_ordering(628) 00:16:24.690 fused_ordering(629) 00:16:24.690 fused_ordering(630) 00:16:24.690 fused_ordering(631) 00:16:24.690 fused_ordering(632) 00:16:24.690 fused_ordering(633) 00:16:24.690 
fused_ordering(634) 00:16:24.690 fused_ordering(635) 00:16:24.690 fused_ordering(636) 00:16:24.690 fused_ordering(637) 00:16:24.690 fused_ordering(638) 00:16:24.690 fused_ordering(639) 00:16:24.690 fused_ordering(640) 00:16:24.690 fused_ordering(641) 00:16:24.690 fused_ordering(642) 00:16:24.690 fused_ordering(643) 00:16:24.690 fused_ordering(644) 00:16:24.690 fused_ordering(645) 00:16:24.690 fused_ordering(646) 00:16:24.690 fused_ordering(647) 00:16:24.690 fused_ordering(648) 00:16:24.690 fused_ordering(649) 00:16:24.690 fused_ordering(650) 00:16:24.690 fused_ordering(651) 00:16:24.690 fused_ordering(652) 00:16:24.690 fused_ordering(653) 00:16:24.690 fused_ordering(654) 00:16:24.690 fused_ordering(655) 00:16:24.690 fused_ordering(656) 00:16:24.690 fused_ordering(657) 00:16:24.690 fused_ordering(658) 00:16:24.690 fused_ordering(659) 00:16:24.690 fused_ordering(660) 00:16:24.690 fused_ordering(661) 00:16:24.690 fused_ordering(662) 00:16:24.690 fused_ordering(663) 00:16:24.690 fused_ordering(664) 00:16:24.690 fused_ordering(665) 00:16:24.690 fused_ordering(666) 00:16:24.690 fused_ordering(667) 00:16:24.690 fused_ordering(668) 00:16:24.690 fused_ordering(669) 00:16:24.690 fused_ordering(670) 00:16:24.690 fused_ordering(671) 00:16:24.690 fused_ordering(672) 00:16:24.690 fused_ordering(673) 00:16:24.690 fused_ordering(674) 00:16:24.690 fused_ordering(675) 00:16:24.690 fused_ordering(676) 00:16:24.690 fused_ordering(677) 00:16:24.690 fused_ordering(678) 00:16:24.690 fused_ordering(679) 00:16:24.690 fused_ordering(680) 00:16:24.690 fused_ordering(681) 00:16:24.690 fused_ordering(682) 00:16:24.690 fused_ordering(683) 00:16:24.690 fused_ordering(684) 00:16:24.690 fused_ordering(685) 00:16:24.690 fused_ordering(686) 00:16:24.690 fused_ordering(687) 00:16:24.690 fused_ordering(688) 00:16:24.690 fused_ordering(689) 00:16:24.690 fused_ordering(690) 00:16:24.690 fused_ordering(691) 00:16:24.690 fused_ordering(692) 00:16:24.690 fused_ordering(693) 00:16:24.690 fused_ordering(694) 00:16:24.690 fused_ordering(695) 00:16:24.690 fused_ordering(696) 00:16:24.690 fused_ordering(697) 00:16:24.690 fused_ordering(698) 00:16:24.690 fused_ordering(699) 00:16:24.690 fused_ordering(700) 00:16:24.690 fused_ordering(701) 00:16:24.690 fused_ordering(702) 00:16:24.690 fused_ordering(703) 00:16:24.690 fused_ordering(704) 00:16:24.690 fused_ordering(705) 00:16:24.690 fused_ordering(706) 00:16:24.690 fused_ordering(707) 00:16:24.690 fused_ordering(708) 00:16:24.690 fused_ordering(709) 00:16:24.690 fused_ordering(710) 00:16:24.690 fused_ordering(711) 00:16:24.690 fused_ordering(712) 00:16:24.690 fused_ordering(713) 00:16:24.690 fused_ordering(714) 00:16:24.690 fused_ordering(715) 00:16:24.690 fused_ordering(716) 00:16:24.690 fused_ordering(717) 00:16:24.690 fused_ordering(718) 00:16:24.690 fused_ordering(719) 00:16:24.690 fused_ordering(720) 00:16:24.690 fused_ordering(721) 00:16:24.690 fused_ordering(722) 00:16:24.690 fused_ordering(723) 00:16:24.690 fused_ordering(724) 00:16:24.690 fused_ordering(725) 00:16:24.690 fused_ordering(726) 00:16:24.690 fused_ordering(727) 00:16:24.690 fused_ordering(728) 00:16:24.690 fused_ordering(729) 00:16:24.690 fused_ordering(730) 00:16:24.690 fused_ordering(731) 00:16:24.690 fused_ordering(732) 00:16:24.690 fused_ordering(733) 00:16:24.690 fused_ordering(734) 00:16:24.690 fused_ordering(735) 00:16:24.690 fused_ordering(736) 00:16:24.690 fused_ordering(737) 00:16:24.690 fused_ordering(738) 00:16:24.690 fused_ordering(739) 00:16:24.690 fused_ordering(740) 00:16:24.690 fused_ordering(741) 
00:16:24.690 fused_ordering(742) 00:16:24.690 fused_ordering(743) 00:16:24.690 fused_ordering(744) 00:16:24.690 fused_ordering(745) 00:16:24.690 fused_ordering(746) 00:16:24.690 fused_ordering(747) 00:16:24.690 fused_ordering(748) 00:16:24.690 fused_ordering(749) 00:16:24.690 fused_ordering(750) 00:16:24.690 fused_ordering(751) 00:16:24.690 fused_ordering(752) 00:16:24.690 fused_ordering(753) 00:16:24.690 fused_ordering(754) 00:16:24.690 fused_ordering(755) 00:16:24.690 fused_ordering(756) 00:16:24.690 fused_ordering(757) 00:16:24.690 fused_ordering(758) 00:16:24.690 fused_ordering(759) 00:16:24.690 fused_ordering(760) 00:16:24.690 fused_ordering(761) 00:16:24.690 fused_ordering(762) 00:16:24.690 fused_ordering(763) 00:16:24.690 fused_ordering(764) 00:16:24.690 fused_ordering(765) 00:16:24.690 fused_ordering(766) 00:16:24.690 fused_ordering(767) 00:16:24.690 fused_ordering(768) 00:16:24.690 fused_ordering(769) 00:16:24.690 fused_ordering(770) 00:16:24.690 fused_ordering(771) 00:16:24.690 fused_ordering(772) 00:16:24.690 fused_ordering(773) 00:16:24.690 fused_ordering(774) 00:16:24.690 fused_ordering(775) 00:16:24.690 fused_ordering(776) 00:16:24.690 fused_ordering(777) 00:16:24.690 fused_ordering(778) 00:16:24.690 fused_ordering(779) 00:16:24.690 fused_ordering(780) 00:16:24.690 fused_ordering(781) 00:16:24.690 fused_ordering(782) 00:16:24.690 fused_ordering(783) 00:16:24.690 fused_ordering(784) 00:16:24.690 fused_ordering(785) 00:16:24.690 fused_ordering(786) 00:16:24.690 fused_ordering(787) 00:16:24.690 fused_ordering(788) 00:16:24.690 fused_ordering(789) 00:16:24.690 fused_ordering(790) 00:16:24.690 fused_ordering(791) 00:16:24.690 fused_ordering(792) 00:16:24.690 fused_ordering(793) 00:16:24.690 fused_ordering(794) 00:16:24.690 fused_ordering(795) 00:16:24.690 fused_ordering(796) 00:16:24.690 fused_ordering(797) 00:16:24.690 fused_ordering(798) 00:16:24.690 fused_ordering(799) 00:16:24.690 fused_ordering(800) 00:16:24.690 fused_ordering(801) 00:16:24.690 fused_ordering(802) 00:16:24.690 fused_ordering(803) 00:16:24.690 fused_ordering(804) 00:16:24.690 fused_ordering(805) 00:16:24.690 fused_ordering(806) 00:16:24.690 fused_ordering(807) 00:16:24.690 fused_ordering(808) 00:16:24.690 fused_ordering(809) 00:16:24.690 fused_ordering(810) 00:16:24.690 fused_ordering(811) 00:16:24.690 fused_ordering(812) 00:16:24.690 fused_ordering(813) 00:16:24.690 fused_ordering(814) 00:16:24.690 fused_ordering(815) 00:16:24.690 fused_ordering(816) 00:16:24.690 fused_ordering(817) 00:16:24.690 fused_ordering(818) 00:16:24.690 fused_ordering(819) 00:16:24.690 fused_ordering(820) 00:16:25.263 fused_ordering(821) 00:16:25.263 fused_ordering(822) 00:16:25.263 fused_ordering(823) 00:16:25.263 fused_ordering(824) 00:16:25.263 fused_ordering(825) 00:16:25.263 fused_ordering(826) 00:16:25.263 fused_ordering(827) 00:16:25.263 fused_ordering(828) 00:16:25.263 fused_ordering(829) 00:16:25.263 fused_ordering(830) 00:16:25.263 fused_ordering(831) 00:16:25.263 fused_ordering(832) 00:16:25.263 fused_ordering(833) 00:16:25.263 fused_ordering(834) 00:16:25.263 fused_ordering(835) 00:16:25.263 fused_ordering(836) 00:16:25.263 fused_ordering(837) 00:16:25.263 fused_ordering(838) 00:16:25.263 fused_ordering(839) 00:16:25.263 fused_ordering(840) 00:16:25.263 fused_ordering(841) 00:16:25.263 fused_ordering(842) 00:16:25.263 fused_ordering(843) 00:16:25.263 fused_ordering(844) 00:16:25.263 fused_ordering(845) 00:16:25.263 fused_ordering(846) 00:16:25.263 fused_ordering(847) 00:16:25.263 fused_ordering(848) 00:16:25.263 
fused_ordering(849) 00:16:25.263 fused_ordering(850) 00:16:25.263 fused_ordering(851) 00:16:25.263 fused_ordering(852) 00:16:25.263 fused_ordering(853) 00:16:25.263 fused_ordering(854) 00:16:25.263 fused_ordering(855) 00:16:25.263 fused_ordering(856) 00:16:25.263 fused_ordering(857) 00:16:25.263 fused_ordering(858) 00:16:25.263 fused_ordering(859) 00:16:25.263 fused_ordering(860) 00:16:25.263 fused_ordering(861) 00:16:25.263 fused_ordering(862) 00:16:25.263 fused_ordering(863) 00:16:25.263 fused_ordering(864) 00:16:25.263 fused_ordering(865) 00:16:25.263 fused_ordering(866) 00:16:25.263 fused_ordering(867) 00:16:25.263 fused_ordering(868) 00:16:25.263 fused_ordering(869) 00:16:25.263 fused_ordering(870) 00:16:25.263 fused_ordering(871) 00:16:25.263 fused_ordering(872) 00:16:25.263 fused_ordering(873) 00:16:25.263 fused_ordering(874) 00:16:25.263 fused_ordering(875) 00:16:25.263 fused_ordering(876) 00:16:25.263 fused_ordering(877) 00:16:25.263 fused_ordering(878) 00:16:25.263 fused_ordering(879) 00:16:25.264 fused_ordering(880) 00:16:25.264 fused_ordering(881) 00:16:25.264 fused_ordering(882) 00:16:25.264 fused_ordering(883) 00:16:25.264 fused_ordering(884) 00:16:25.264 fused_ordering(885) 00:16:25.264 fused_ordering(886) 00:16:25.264 fused_ordering(887) 00:16:25.264 fused_ordering(888) 00:16:25.264 fused_ordering(889) 00:16:25.264 fused_ordering(890) 00:16:25.264 fused_ordering(891) 00:16:25.264 fused_ordering(892) 00:16:25.264 fused_ordering(893) 00:16:25.264 fused_ordering(894) 00:16:25.264 fused_ordering(895) 00:16:25.264 fused_ordering(896) 00:16:25.264 fused_ordering(897) 00:16:25.264 fused_ordering(898) 00:16:25.264 fused_ordering(899) 00:16:25.264 fused_ordering(900) 00:16:25.264 fused_ordering(901) 00:16:25.264 fused_ordering(902) 00:16:25.264 fused_ordering(903) 00:16:25.264 fused_ordering(904) 00:16:25.264 fused_ordering(905) 00:16:25.264 fused_ordering(906) 00:16:25.264 fused_ordering(907) 00:16:25.264 fused_ordering(908) 00:16:25.264 fused_ordering(909) 00:16:25.264 fused_ordering(910) 00:16:25.264 fused_ordering(911) 00:16:25.264 fused_ordering(912) 00:16:25.264 fused_ordering(913) 00:16:25.264 fused_ordering(914) 00:16:25.264 fused_ordering(915) 00:16:25.264 fused_ordering(916) 00:16:25.264 fused_ordering(917) 00:16:25.264 fused_ordering(918) 00:16:25.264 fused_ordering(919) 00:16:25.264 fused_ordering(920) 00:16:25.264 fused_ordering(921) 00:16:25.264 fused_ordering(922) 00:16:25.264 fused_ordering(923) 00:16:25.264 fused_ordering(924) 00:16:25.264 fused_ordering(925) 00:16:25.264 fused_ordering(926) 00:16:25.264 fused_ordering(927) 00:16:25.264 fused_ordering(928) 00:16:25.264 fused_ordering(929) 00:16:25.264 fused_ordering(930) 00:16:25.264 fused_ordering(931) 00:16:25.264 fused_ordering(932) 00:16:25.264 fused_ordering(933) 00:16:25.264 fused_ordering(934) 00:16:25.264 fused_ordering(935) 00:16:25.264 fused_ordering(936) 00:16:25.264 fused_ordering(937) 00:16:25.264 fused_ordering(938) 00:16:25.264 fused_ordering(939) 00:16:25.264 fused_ordering(940) 00:16:25.264 fused_ordering(941) 00:16:25.264 fused_ordering(942) 00:16:25.264 fused_ordering(943) 00:16:25.264 fused_ordering(944) 00:16:25.264 fused_ordering(945) 00:16:25.264 fused_ordering(946) 00:16:25.264 fused_ordering(947) 00:16:25.264 fused_ordering(948) 00:16:25.264 fused_ordering(949) 00:16:25.264 fused_ordering(950) 00:16:25.264 fused_ordering(951) 00:16:25.264 fused_ordering(952) 00:16:25.264 fused_ordering(953) 00:16:25.264 fused_ordering(954) 00:16:25.264 fused_ordering(955) 00:16:25.264 fused_ordering(956) 
00:16:25.264 fused_ordering(957) 00:16:25.264 fused_ordering(958) 00:16:25.264 fused_ordering(959) 00:16:25.264 fused_ordering(960) 00:16:25.264 fused_ordering(961) 00:16:25.264 fused_ordering(962) 00:16:25.264 fused_ordering(963) 00:16:25.264 fused_ordering(964) 00:16:25.264 fused_ordering(965) 00:16:25.264 fused_ordering(966) 00:16:25.264 fused_ordering(967) 00:16:25.264 fused_ordering(968) 00:16:25.264 fused_ordering(969) 00:16:25.264 fused_ordering(970) 00:16:25.264 fused_ordering(971) 00:16:25.264 fused_ordering(972) 00:16:25.264 fused_ordering(973) 00:16:25.264 fused_ordering(974) 00:16:25.264 fused_ordering(975) 00:16:25.264 fused_ordering(976) 00:16:25.264 fused_ordering(977) 00:16:25.264 fused_ordering(978) 00:16:25.264 fused_ordering(979) 00:16:25.264 fused_ordering(980) 00:16:25.264 fused_ordering(981) 00:16:25.264 fused_ordering(982) 00:16:25.264 fused_ordering(983) 00:16:25.264 fused_ordering(984) 00:16:25.264 fused_ordering(985) 00:16:25.264 fused_ordering(986) 00:16:25.264 fused_ordering(987) 00:16:25.264 fused_ordering(988) 00:16:25.264 fused_ordering(989) 00:16:25.264 fused_ordering(990) 00:16:25.264 fused_ordering(991) 00:16:25.264 fused_ordering(992) 00:16:25.264 fused_ordering(993) 00:16:25.264 fused_ordering(994) 00:16:25.264 fused_ordering(995) 00:16:25.264 fused_ordering(996) 00:16:25.264 fused_ordering(997) 00:16:25.264 fused_ordering(998) 00:16:25.264 fused_ordering(999) 00:16:25.264 fused_ordering(1000) 00:16:25.264 fused_ordering(1001) 00:16:25.264 fused_ordering(1002) 00:16:25.264 fused_ordering(1003) 00:16:25.264 fused_ordering(1004) 00:16:25.264 fused_ordering(1005) 00:16:25.264 fused_ordering(1006) 00:16:25.264 fused_ordering(1007) 00:16:25.264 fused_ordering(1008) 00:16:25.264 fused_ordering(1009) 00:16:25.264 fused_ordering(1010) 00:16:25.264 fused_ordering(1011) 00:16:25.264 fused_ordering(1012) 00:16:25.264 fused_ordering(1013) 00:16:25.264 fused_ordering(1014) 00:16:25.264 fused_ordering(1015) 00:16:25.264 fused_ordering(1016) 00:16:25.264 fused_ordering(1017) 00:16:25.264 fused_ordering(1018) 00:16:25.264 fused_ordering(1019) 00:16:25.264 fused_ordering(1020) 00:16:25.264 fused_ordering(1021) 00:16:25.264 fused_ordering(1022) 00:16:25.264 fused_ordering(1023) 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.264 rmmod nvme_tcp 00:16:25.264 rmmod nvme_fabrics 00:16:25.264 rmmod nvme_keyring 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 54151 ']' 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 54151 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 54151 ']' 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 54151 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.264 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 54151 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 54151' 00:16:25.526 killing process with pid 54151 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 54151 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 54151 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.526 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.079 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:28.079 00:16:28.079 real 0m13.601s 00:16:28.079 user 0m7.596s 00:16:28.079 sys 0m7.470s 00:16:28.079 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.079 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:28.079 ************************************ 00:16:28.079 END TEST nvmf_fused_ordering 00:16:28.079 ************************************ 00:16:28.079 07:22:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:28.079 07:22:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:28.079 07:22:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.079 07:22:34 nvmf_tcp.nvmf_target_extra 
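The fused_ordering pass above attaches to nqn.2016-06.io.spdk:cnode1 (namespace 1, 1 GB) and prints a fused_ordering(N) marker for each of its 1024 iterations before tearing the target down. A minimal, purely hypothetical post-processing sketch in the same shell style, assuming this console output has been saved to a file (console.log is an assumed name, not something this run produces), to confirm the counter reached its final value of 1023:

# Hypothetical helper, not part of the test suite: check that a saved copy of the
# console output above contains the final fused_ordering marker.
LOG=console.log                                   # assumed capture of the output above
last=$(grep -o 'fused_ordering([0-9]*)' "$LOG" | tail -n 1)
if [ "$last" = "fused_ordering(1023)" ]; then
    echo "fused ordering loop completed: 0-1023"
else
    echo "fused ordering loop incomplete, last marker: ${last:-none}"
fi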
-- common/autotest_common.sh@10 -- # set +x 00:16:28.079 ************************************ 00:16:28.079 START TEST nvmf_ns_masking 00:16:28.079 ************************************ 00:16:28.079 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:28.079 * Looking for test storage... 00:16:28.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.079 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.080 07:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2524e4d3-0865-49ef-b9c2-5293c2315acc 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b7a16a4f-3cc4-4117-87a2-583cee7cc9d4 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0f19b63a-1862-4e88-a2c7-45d0e1bdcd2f 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:28.080 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:34.705 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.705 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:34.706 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:34.706 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
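The NIC discovery above scans the PCI bus for supported devices, matches the two Intel E810 functions (vendor 0x8086, device 0x159b), and resolves each one to its kernel net device through sysfs, which is where names like cvl_0_0 come from. A condensed sketch of that lookup, using the first PCI address reported in this log as an example input:

# Resolve a PCI function to its net device name via sysfs, the same way the
# gather_supported_nvmf_pci_devs helper above does it.
pci=0000:4b:00.0
for dev in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$dev" ] || continue                     # no net device bound to this function
    echo "Found net device under $pci: ${dev##*/}"
done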
tcp == tcp ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:34.706 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:34.706 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.706 07:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.706 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.706 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:34.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:16:34.968 00:16:34.968 --- 10.0.0.2 ping statistics --- 00:16:34.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.968 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:16:34.968 00:16:34.968 --- 10.0.0.1 ping statistics --- 00:16:34.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.968 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=59177 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 59177 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 59177 ']' 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
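The nvmf_tcp_init sequence above puts the first E810 port into a private network namespace (cvl_0_0_ns_spdk) so that the target at 10.0.0.2 and the initiator at 10.0.0.1 can exercise the physical link from a single host, then proves connectivity with one ping in each direction before the target application is started. Condensed from the trace, with the interface and namespace names this particular run picked:

# Target-side port lives in its own netns; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # keep the firewall from blocking NVMe/TCP
ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace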
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.968 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:34.968 [2024-07-25 07:22:42.185998] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:16:34.968 [2024-07-25 07:22:42.186047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.968 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.968 [2024-07-25 07:22:42.253395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.968 [2024-07-25 07:22:42.316586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.968 [2024-07-25 07:22:42.316624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.968 [2024-07-25 07:22:42.316636] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.968 [2024-07-25 07:22:42.316642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.968 [2024-07-25 07:22:42.316648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.968 [2024-07-25 07:22:42.316667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.229 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.229 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:35.229 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:35.229 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:35.229 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:35.229 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.229 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:35.229 [2024-07-25 07:22:42.582146] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.491 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:35.491 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:35.491 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:35.491 Malloc1 00:16:35.491 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:35.752 Malloc2 00:16:35.752 07:22:42 
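With connectivity verified, nvmfappstart launches nvmf_tgt inside the target namespace, waits for its RPC socket (/var/tmp/spdk.sock), and then configures it over rpc.py: one TCP transport plus two 64 MB malloc bdevs (Malloc1 and Malloc2) that will back the namespaces used by the masking checks. Reduced to the commands visible in the trace, with $rpc standing in for the full scripts/rpc.py path used above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Start the target in the netns created earlier; the test then waits on /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
$rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, same options as the run above
$rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MB bdev, 512-byte blocks
$rpc bdev_malloc_create 64 512 -b Malloc2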
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:36.013 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:36.013 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.272 [2024-07-25 07:22:43.454848] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.272 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:36.272 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0f19b63a-1862-4e88-a2c7-45d0e1bdcd2f -a 10.0.0.2 -s 4420 -i 4 00:16:36.272 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:36.272 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:36.272 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.272 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:36.272 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:38.817 [ 0]:0x1 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4ee73e19b5d4f469bfb9943cbf74b42 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4ee73e19b5d4f469bfb9943cbf74b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:38.817 [ 0]:0x1 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4ee73e19b5d4f469bfb9943cbf74b42 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4ee73e19b5d4f469bfb9943cbf74b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:38.817 [ 1]:0x2 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:38.817 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:38.817 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9712c0960c104c41bc4689a498bc1ccb 00:16:38.817 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9712c0960c104c41bc4689a498bc1ccb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:38.817 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:38.817 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.817 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.078 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:39.078 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:39.078 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0f19b63a-1862-4e88-a2c7-45d0e1bdcd2f -a 10.0.0.2 -s 4420 -i 4 00:16:39.338 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:39.338 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:39.338 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.338 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:39.338 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:39.338 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:41.252 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:41.252 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:41.252 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.252 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:41.252 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.252 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:41.252 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:41.252 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:41.513 [ 0]:0x2 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9712c0960c104c41bc4689a498bc1ccb 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9712c0960c104c41bc4689a498bc1ccb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.513 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:41.774 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:41.774 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.774 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:41.774 [ 0]:0x1 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4ee73e19b5d4f469bfb9943cbf74b42 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4ee73e19b5d4f469bfb9943cbf74b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:41.774 [ 1]:0x2 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9712c0960c104c41bc4689a498bc1ccb 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9712c0960c104c41bc4689a498bc1ccb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.774 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:42.047 [ 0]:0x2 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:42.047 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:16:42.308 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9712c0960c104c41bc4689a498bc1ccb 00:16:42.308 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9712c0960c104c41bc4689a498bc1ccb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:42.308 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:42.308 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.308 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:42.308 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:42.308 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0f19b63a-1862-4e88-a2c7-45d0e1bdcd2f -a 10.0.0.2 -s 4420 -i 4 00:16:42.570 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:42.570 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:42.570 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:42.570 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:42.570 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:42.570 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:44.484 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:44.484 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:44.484 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:44.484 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:44.484 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.484 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:44.484 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:44.484 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:44.745 [ 0]:0x1 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c4ee73e19b5d4f469bfb9943cbf74b42 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c4ee73e19b5d4f469bfb9943cbf74b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:44.745 [ 1]:0x2 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9712c0960c104c41bc4689a498bc1ccb 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9712c0960c104c41bc4689a498bc1ccb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.745 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:45.006 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:45.006 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:45.006 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:45.006 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:45.006 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:45.007 07:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:45.007 [ 0]:0x2 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9712c0960c104c41bc4689a498bc1ccb 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9712c0960c104c41bc4689a498bc1ccb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:45.007 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:45.269 [2024-07-25 07:22:52.404622] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:45.269 request: 00:16:45.269 { 00:16:45.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.269 "nsid": 2, 00:16:45.269 "host": "nqn.2016-06.io.spdk:host1", 00:16:45.269 "method": "nvmf_ns_remove_host", 00:16:45.269 "req_id": 1 00:16:45.269 } 00:16:45.269 Got JSON-RPC error response 00:16:45.269 response: 00:16:45.269 { 00:16:45.269 "code": -32602, 00:16:45.269 "message": "Invalid parameters" 00:16:45.269 } 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:45.269 [ 0]:0x2 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9712c0960c104c41bc4689a498bc1ccb 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9712c0960c104c41bc4689a498bc1ccb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=61340 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 61340 /var/tmp/host.sock 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 61340 ']' 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:45.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.269 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:45.530 [2024-07-25 07:22:52.662595] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
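
The masking exchange traced above reduces to a short RPC sequence. The sketch below condenses it as a reading aid only; $rpc stands for the spdk/scripts/rpc.py path used throughout this run, and the NQN, namespace and host values are the ones printed in the log, not fresh inputs.

    # Namespace 1 is attached masked (--no-auto-visible), namespace 2 auto-visible.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2

    # Grant, then revoke, host1's view of namespace 1; the kernel initiator that
    # connected with -q nqn.2016-06.io.spdk:host1 sees nsid 1 appear and disappear.
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # Visibility is checked from the host side the way ns_is_visible() does it:
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all-zero NGUID => hidden

    # Namespace 2 was added without --no-auto-visible, so the same remove call
    # against nsid 2 fails with JSON-RPC -32602 (Invalid parameters); the NOT
    # wrapper above treats that failure as the expected result.
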
00:16:45.530 [2024-07-25 07:22:52.662650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61340 ] 00:16:45.530 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.530 [2024-07-25 07:22:52.738480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.530 [2024-07-25 07:22:52.803133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.105 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.105 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:46.105 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.367 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:46.367 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2524e4d3-0865-49ef-b9c2-5293c2315acc 00:16:46.367 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:46.367 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2524E4D3086549EFB9C25293C2315ACC -i 00:16:46.628 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid b7a16a4f-3cc4-4117-87a2-583cee7cc9d4 00:16:46.628 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:46.628 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B7A16A4F3CC4411787A2583CEE7CC9D4 -i 00:16:46.889 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:46.889 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:47.150 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:47.150 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:47.410 nvme0n1 00:16:47.671 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:47.671 07:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:47.671 nvme1n2 00:16:47.932 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:47.932 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:47.932 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:47.932 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:47.932 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:47.932 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:47.932 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:47.932 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:47.932 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:48.192 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2524e4d3-0865-49ef-b9c2-5293c2315acc == \2\5\2\4\e\4\d\3\-\0\8\6\5\-\4\9\e\f\-\b\9\c\2\-\5\2\9\3\c\2\3\1\5\a\c\c ]] 00:16:48.192 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:48.192 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:48.192 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:48.192 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ b7a16a4f-3cc4-4117-87a2-583cee7cc9d4 == \b\7\a\1\6\a\4\f\-\3\c\c\4\-\4\1\1\7\-\8\7\a\2\-\5\8\3\c\e\e\7\c\c\9\d\4 ]] 00:16:48.192 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 61340 00:16:48.192 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 61340 ']' 00:16:48.192 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 61340 00:16:48.192 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:48.192 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.192 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61340 00:16:48.453 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:48.453 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:48.453 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 61340' 00:16:48.453 killing process with pid 61340 00:16:48.453 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 61340 00:16:48.453 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 61340 00:16:48.453 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:48.713 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:48.713 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:48.713 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:48.713 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:48.713 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.713 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:48.713 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.713 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.713 rmmod nvme_tcp 00:16:48.713 rmmod nvme_fabrics 00:16:48.713 rmmod nvme_keyring 00:16:48.713 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.713 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:48.713 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:48.713 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 59177 ']' 00:16:48.713 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 59177 00:16:48.714 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 59177 ']' 00:16:48.714 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 59177 00:16:48.714 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:48.714 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.714 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59177 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59177' 00:16:48.975 killing process with pid 59177 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 59177 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 59177 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.975 07:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.975 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:51.523 00:16:51.523 real 0m23.404s 00:16:51.523 user 0m23.466s 00:16:51.523 sys 0m7.231s 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:51.523 ************************************ 00:16:51.523 END TEST nvmf_ns_masking 00:16:51.523 ************************************ 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.523 ************************************ 00:16:51.523 START TEST nvmf_nvme_cli 00:16:51.523 ************************************ 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:51.523 * Looking for test storage... 
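
The second half of the ns_masking run, just completed above, repeats the same checks through an SPDK initiator instead of the kernel one. The commands below are a condensed sketch of that trace, assuming $rpc targets the nvmf target's default RPC socket and $hostrpc is rpc.py -s /var/tmp/host.sock against the spdk_tgt started as pid 61340; the UUID/NGUID pairs are the ones printed in the run.

    # Re-create both namespaces with explicit NGUIDs; -i appears to be the short
    # form of the --no-auto-visible flag used earlier, so both start out masked.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2524E4D3086549EFB9C25293C2315ACC -i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B7A16A4F3CC4411787A2583CEE7CC9D4 -i

    # Namespace 1 is exposed to host1 only, namespace 2 to host2 only.
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2

    # One bdev_nvme controller is attached per host NQN; each surfaces exactly
    # one namespace bdev (nvme0n1 for host1, nvme1n2 for host2).
    $hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    $hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1

    # bdev_get_bdevs then confirms nvme0n1 carries 2524e4d3-0865-49ef-b9c2-5293c2315acc
    # and nvme1n2 carries b7a16a4f-3cc4-4117-87a2-583cee7cc9d4.
    $hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
    $hostrpc bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'

Teardown follows the path visible above: kill the host-side spdk_tgt (61340), delete nqn.2016-06.io.spdk:cnode1, unload nvme-tcp/nvme-fabrics/nvme-keyring, stop the nvmf target (59177) and flush the test interfaces.
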
00:16:51.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.523 07:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.523 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.159 07:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:58.159 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:58.159 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.159 07:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:58.159 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:58.159 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.159 07:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.159 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.160 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.160 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.160 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.160 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.160 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:16:58.421 00:16:58.421 --- 10.0.0.2 ping statistics --- 00:16:58.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.421 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:58.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:16:58.421 00:16:58.421 --- 10.0.0.1 ping statistics --- 00:16:58.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.421 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=66085 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 66085 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 66085 ']' 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:58.421 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:58.682 [2024-07-25 07:23:05.823905] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:16:58.682 [2024-07-25 07:23:05.823955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.682 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.682 [2024-07-25 07:23:05.891272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.682 [2024-07-25 07:23:05.960291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.682 [2024-07-25 07:23:05.960330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.682 [2024-07-25 07:23:05.960337] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.682 [2024-07-25 07:23:05.960344] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.682 [2024-07-25 07:23:05.960349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.682 [2024-07-25 07:23:05.960417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.682 [2024-07-25 07:23:05.960533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.682 [2024-07-25 07:23:05.960689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.682 [2024-07-25 07:23:05.960691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.255 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.255 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:59.255 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.255 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:59.255 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.516 [2024-07-25 07:23:06.649192] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.516 Malloc0 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:59.516 07:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.516 Malloc1 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.516 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.517 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.517 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.517 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.517 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.517 [2024-07-25 07:23:06.739056] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.517 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.517 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:59.517 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.517 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:59.517 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.517 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:16:59.778 00:16:59.778 Discovery Log Number of Records 2, Generation counter 2 00:16:59.778 =====Discovery Log Entry 0====== 00:16:59.778 trtype: tcp 00:16:59.778 adrfam: ipv4 00:16:59.778 subtype: current discovery subsystem 00:16:59.778 treq: not required 
00:16:59.778 portid: 0 00:16:59.778 trsvcid: 4420 00:16:59.778 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:59.778 traddr: 10.0.0.2 00:16:59.778 eflags: explicit discovery connections, duplicate discovery information 00:16:59.778 sectype: none 00:16:59.778 =====Discovery Log Entry 1====== 00:16:59.778 trtype: tcp 00:16:59.778 adrfam: ipv4 00:16:59.778 subtype: nvme subsystem 00:16:59.778 treq: not required 00:16:59.778 portid: 0 00:16:59.778 trsvcid: 4420 00:16:59.778 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:59.778 traddr: 10.0.0.2 00:16:59.778 eflags: none 00:16:59.778 sectype: none 00:16:59.778 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:59.778 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:59.778 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:59.778 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:59.778 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:59.778 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:59.778 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:59.778 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:59.778 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:59.778 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:59.778 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:01.168 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:01.168 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:01.168 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:01.169 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:01.169 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:01.169 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:03.084 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:03.084 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:03.084 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:03.084 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:03.084 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:03.084 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:03.084 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:03.084 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:03.084 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.084 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:03.345 /dev/nvme0n1 ]] 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.345 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:03.606 07:23:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:03.867 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.867 rmmod nvme_tcp 00:17:03.867 rmmod nvme_fabrics 00:17:03.867 rmmod nvme_keyring 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 66085 ']' 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 66085 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 66085 ']' 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 66085 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66085 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66085' 00:17:03.867 killing process with pid 66085 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 66085 00:17:03.867 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 66085 00:17:04.128 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.128 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.128 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.128 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.128 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.128 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.128 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.128 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.675 00:17:06.675 real 0m15.019s 00:17:06.675 user 0m23.456s 00:17:06.675 sys 0m6.040s 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:06.675 ************************************ 00:17:06.675 END TEST nvmf_nvme_cli 00:17:06.675 ************************************ 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:06.675 ************************************ 00:17:06.675 START TEST nvmf_vfio_user 00:17:06.675 ************************************ 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:06.675 * Looking for test storage... 
00:17:06.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:06.675 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:06.675 07:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=67853 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 67853' 00:17:06.676 Process pid: 67853 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 67853 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 67853 ']' 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.676 07:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:06.676 [2024-07-25 07:23:13.660940] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:17:06.676 [2024-07-25 07:23:13.661011] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.676 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.676 [2024-07-25 07:23:13.728089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.676 [2024-07-25 07:23:13.803443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.676 [2024-07-25 07:23:13.803485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:06.676 [2024-07-25 07:23:13.803492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.676 [2024-07-25 07:23:13.803498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.676 [2024-07-25 07:23:13.803504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.676 [2024-07-25 07:23:13.803654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.676 [2024-07-25 07:23:13.803779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.676 [2024-07-25 07:23:13.803941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.676 [2024-07-25 07:23:13.803942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.247 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.247 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:07.247 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:08.190 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:08.450 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:08.450 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:08.450 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:08.450 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:08.450 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:08.450 Malloc1 00:17:08.450 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:08.712 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:08.973 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:08.973 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:08.973 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:08.973 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:09.235 Malloc2 00:17:09.235 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
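The setup_nvmf_vfio_user trace above repeats the same per-device pattern for both controllers (the second device, cnode2, is being configured at this point and finishes just below). Condensed into plain rpc.py calls, and with the full rpc.py path shortened into a shell variable purely for readability, one iteration of that loop looks roughly like this:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# one-time: enable the vfio-user transport in the running nvmf_tgt
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user

# per device i (shown here for i=1): a 64 MB malloc bdev with 512-byte
# blocks backs a subsystem that listens on a vfio-user socket directory
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
# a vfio-user listener address is a directory path, not an IP:port pair
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Once a device is wired up this way, the initiator side attaches to it with spdk_nvme_identify using trtype:VFIOUSER and traddr set to that same directory, which is exactly what the run does next.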
00:17:09.496 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:09.496 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:09.758 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:09.758 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:09.758 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:09.758 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:09.758 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:09.758 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:09.758 [2024-07-25 07:23:17.035021] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:17:09.758 [2024-07-25 07:23:17.035061] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68545 ] 00:17:09.758 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.758 [2024-07-25 07:23:17.068033] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:09.758 [2024-07-25 07:23:17.074497] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:09.758 [2024-07-25 07:23:17.074517] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7b3caa6000 00:17:09.758 [2024-07-25 07:23:17.078207] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.758 [2024-07-25 07:23:17.078514] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.758 [2024-07-25 07:23:17.079520] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.758 [2024-07-25 07:23:17.080531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:09.758 [2024-07-25 07:23:17.081530] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:09.758 [2024-07-25 07:23:17.082537] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.759 [2024-07-25 07:23:17.083546] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:09.759 [2024-07-25 07:23:17.084551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:09.759 [2024-07-25 07:23:17.085562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:09.759 [2024-07-25 07:23:17.085571] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7b3ca9b000 00:17:09.759 [2024-07-25 07:23:17.086897] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:09.759 [2024-07-25 07:23:17.106816] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:09.759 [2024-07-25 07:23:17.106839] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:09.759 [2024-07-25 07:23:17.109690] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:09.759 [2024-07-25 07:23:17.109736] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:09.759 [2024-07-25 07:23:17.109824] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:09.759 [2024-07-25 07:23:17.109839] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:09.759 [2024-07-25 07:23:17.109844] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:09.759 [2024-07-25 07:23:17.110684] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:09.759 [2024-07-25 07:23:17.110694] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:09.759 [2024-07-25 07:23:17.110701] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:09.759 [2024-07-25 07:23:17.111692] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:09.759 [2024-07-25 07:23:17.111700] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:09.759 [2024-07-25 07:23:17.111707] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:09.759 [2024-07-25 07:23:17.112698] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:09.759 [2024-07-25 07:23:17.112707] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:09.759 [2024-07-25 07:23:17.113705] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:09.759 [2024-07-25 07:23:17.113713] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:09.759 [2024-07-25 07:23:17.113718] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:09.759 [2024-07-25 07:23:17.113724] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:09.759 [2024-07-25 07:23:17.113830] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:09.759 [2024-07-25 07:23:17.113834] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:09.759 [2024-07-25 07:23:17.113840] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:09.759 [2024-07-25 07:23:17.114708] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:09.759 [2024-07-25 07:23:17.115715] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:09.759 [2024-07-25 07:23:17.116724] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:09.759 [2024-07-25 07:23:17.117719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:09.759 [2024-07-25 07:23:17.117780] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:09.759 [2024-07-25 07:23:17.118734] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:09.759 [2024-07-25 07:23:17.118742] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:09.759 [2024-07-25 07:23:17.118749] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.118770] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:09.759 [2024-07-25 07:23:17.118778] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.118792] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:09.759 [2024-07-25 07:23:17.118796] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.759 [2024-07-25 07:23:17.118800] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.759 [2024-07-25 07:23:17.118814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.759 [2024-07-25 07:23:17.118849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:09.759 [2024-07-25 07:23:17.118858] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:09.759 [2024-07-25 07:23:17.118862] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:09.759 [2024-07-25 07:23:17.118867] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:09.759 [2024-07-25 07:23:17.118871] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:09.759 [2024-07-25 07:23:17.118876] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:09.759 [2024-07-25 07:23:17.118880] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:09.759 [2024-07-25 07:23:17.118885] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.118892] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.118904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:09.759 [2024-07-25 07:23:17.118915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:09.759 [2024-07-25 07:23:17.118930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.759 [2024-07-25 07:23:17.118939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.759 [2024-07-25 07:23:17.118947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.759 [2024-07-25 07:23:17.118955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:09.759 [2024-07-25 07:23:17.118960] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.118968] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.118978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:09.759 [2024-07-25 07:23:17.118987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:09.759 [2024-07-25 07:23:17.118992] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:09.759 
[2024-07-25 07:23:17.118997] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.119005] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.119011] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.119020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:09.759 [2024-07-25 07:23:17.119032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:09.759 [2024-07-25 07:23:17.119093] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.119101] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.119109] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:09.759 [2024-07-25 07:23:17.119113] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:09.759 [2024-07-25 07:23:17.119117] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.759 [2024-07-25 07:23:17.119123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:09.759 [2024-07-25 07:23:17.119132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:09.759 [2024-07-25 07:23:17.119144] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:09.759 [2024-07-25 07:23:17.119152] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.119160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:09.759 [2024-07-25 07:23:17.119167] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:09.759 [2024-07-25 07:23:17.119171] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.759 [2024-07-25 07:23:17.119174] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.759 [2024-07-25 07:23:17.119180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.760 [2024-07-25 07:23:17.119198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:09.760 [2024-07-25 07:23:17.119215] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:17:09.760 [2024-07-25 07:23:17.119222] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:09.760 [2024-07-25 07:23:17.119229] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:09.760 [2024-07-25 07:23:17.119234] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.760 [2024-07-25 07:23:17.119237] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.760 [2024-07-25 07:23:17.119245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.760 [2024-07-25 07:23:17.119254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:09.760 [2024-07-25 07:23:17.119262] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:09.760 [2024-07-25 07:23:17.119268] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:09.760 [2024-07-25 07:23:17.119276] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:09.760 [2024-07-25 07:23:17.119283] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:09.760 [2024-07-25 07:23:17.119288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:09.760 [2024-07-25 07:23:17.119293] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:09.760 [2024-07-25 07:23:17.119298] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:09.760 [2024-07-25 07:23:17.119303] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:09.760 [2024-07-25 07:23:17.119308] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:09.760 [2024-07-25 07:23:17.119326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:09.760 [2024-07-25 07:23:17.119335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:09.760 [2024-07-25 07:23:17.119347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:09.760 [2024-07-25 07:23:17.119358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:09.760 [2024-07-25 07:23:17.119369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:09.760 [2024-07-25 
07:23:17.119379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:09.760 [2024-07-25 07:23:17.119390] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:09.760 [2024-07-25 07:23:17.119399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:09.760 [2024-07-25 07:23:17.119411] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:09.760 [2024-07-25 07:23:17.119416] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:09.760 [2024-07-25 07:23:17.119419] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:09.760 [2024-07-25 07:23:17.119423] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:09.760 [2024-07-25 07:23:17.119426] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:09.760 [2024-07-25 07:23:17.119432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:09.760 [2024-07-25 07:23:17.119440] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:09.760 [2024-07-25 07:23:17.119446] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:09.760 [2024-07-25 07:23:17.119450] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.760 [2024-07-25 07:23:17.119455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:09.760 [2024-07-25 07:23:17.119463] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:09.760 [2024-07-25 07:23:17.119467] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:09.760 [2024-07-25 07:23:17.119470] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.760 [2024-07-25 07:23:17.119476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:09.760 [2024-07-25 07:23:17.119484] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:09.760 [2024-07-25 07:23:17.119488] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:09.760 [2024-07-25 07:23:17.119491] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:09.760 [2024-07-25 07:23:17.119497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:09.760 [2024-07-25 07:23:17.119504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:09.760 [2024-07-25 07:23:17.119515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:09.760 [2024-07-25 
07:23:17.119527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:09.760 [2024-07-25 07:23:17.119534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:09.760 ===================================================== 00:17:09.760 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:09.760 ===================================================== 00:17:09.760 Controller Capabilities/Features 00:17:09.760 ================================ 00:17:09.760 Vendor ID: 4e58 00:17:09.760 Subsystem Vendor ID: 4e58 00:17:09.760 Serial Number: SPDK1 00:17:09.760 Model Number: SPDK bdev Controller 00:17:09.760 Firmware Version: 24.09 00:17:09.760 Recommended Arb Burst: 6 00:17:09.760 IEEE OUI Identifier: 8d 6b 50 00:17:09.760 Multi-path I/O 00:17:09.760 May have multiple subsystem ports: Yes 00:17:09.760 May have multiple controllers: Yes 00:17:09.760 Associated with SR-IOV VF: No 00:17:09.760 Max Data Transfer Size: 131072 00:17:09.760 Max Number of Namespaces: 32 00:17:09.760 Max Number of I/O Queues: 127 00:17:09.760 NVMe Specification Version (VS): 1.3 00:17:09.760 NVMe Specification Version (Identify): 1.3 00:17:09.760 Maximum Queue Entries: 256 00:17:09.760 Contiguous Queues Required: Yes 00:17:09.760 Arbitration Mechanisms Supported 00:17:09.760 Weighted Round Robin: Not Supported 00:17:09.760 Vendor Specific: Not Supported 00:17:09.760 Reset Timeout: 15000 ms 00:17:09.760 Doorbell Stride: 4 bytes 00:17:09.760 NVM Subsystem Reset: Not Supported 00:17:09.760 Command Sets Supported 00:17:09.760 NVM Command Set: Supported 00:17:09.760 Boot Partition: Not Supported 00:17:09.760 Memory Page Size Minimum: 4096 bytes 00:17:09.760 Memory Page Size Maximum: 4096 bytes 00:17:09.760 Persistent Memory Region: Not Supported 00:17:09.760 Optional Asynchronous Events Supported 00:17:09.760 Namespace Attribute Notices: Supported 00:17:09.760 Firmware Activation Notices: Not Supported 00:17:09.760 ANA Change Notices: Not Supported 00:17:09.760 PLE Aggregate Log Change Notices: Not Supported 00:17:09.760 LBA Status Info Alert Notices: Not Supported 00:17:09.760 EGE Aggregate Log Change Notices: Not Supported 00:17:09.760 Normal NVM Subsystem Shutdown event: Not Supported 00:17:09.760 Zone Descriptor Change Notices: Not Supported 00:17:09.760 Discovery Log Change Notices: Not Supported 00:17:09.760 Controller Attributes 00:17:09.760 128-bit Host Identifier: Supported 00:17:09.760 Non-Operational Permissive Mode: Not Supported 00:17:09.760 NVM Sets: Not Supported 00:17:09.760 Read Recovery Levels: Not Supported 00:17:09.760 Endurance Groups: Not Supported 00:17:09.760 Predictable Latency Mode: Not Supported 00:17:09.760 Traffic Based Keep ALive: Not Supported 00:17:09.760 Namespace Granularity: Not Supported 00:17:09.760 SQ Associations: Not Supported 00:17:09.760 UUID List: Not Supported 00:17:09.760 Multi-Domain Subsystem: Not Supported 00:17:09.760 Fixed Capacity Management: Not Supported 00:17:09.760 Variable Capacity Management: Not Supported 00:17:09.760 Delete Endurance Group: Not Supported 00:17:09.760 Delete NVM Set: Not Supported 00:17:09.760 Extended LBA Formats Supported: Not Supported 00:17:09.760 Flexible Data Placement Supported: Not Supported 00:17:09.760 00:17:09.760 Controller Memory Buffer Support 00:17:09.760 ================================ 00:17:09.760 Supported: No 00:17:09.760 00:17:09.760 Persistent 
Memory Region Support 00:17:09.760 ================================ 00:17:09.760 Supported: No 00:17:09.760 00:17:09.760 Admin Command Set Attributes 00:17:09.760 ============================ 00:17:09.760 Security Send/Receive: Not Supported 00:17:09.760 Format NVM: Not Supported 00:17:09.760 Firmware Activate/Download: Not Supported 00:17:09.760 Namespace Management: Not Supported 00:17:09.761 Device Self-Test: Not Supported 00:17:09.761 Directives: Not Supported 00:17:09.761 NVMe-MI: Not Supported 00:17:09.761 Virtualization Management: Not Supported 00:17:09.761 Doorbell Buffer Config: Not Supported 00:17:09.761 Get LBA Status Capability: Not Supported 00:17:09.761 Command & Feature Lockdown Capability: Not Supported 00:17:09.761 Abort Command Limit: 4 00:17:09.761 Async Event Request Limit: 4 00:17:09.761 Number of Firmware Slots: N/A 00:17:09.761 Firmware Slot 1 Read-Only: N/A 00:17:09.761 Firmware Activation Without Reset: N/A 00:17:09.761 Multiple Update Detection Support: N/A 00:17:09.761 Firmware Update Granularity: No Information Provided 00:17:09.761 Per-Namespace SMART Log: No 00:17:09.761 Asymmetric Namespace Access Log Page: Not Supported 00:17:09.761 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:09.761 Command Effects Log Page: Supported 00:17:09.761 Get Log Page Extended Data: Supported 00:17:09.761 Telemetry Log Pages: Not Supported 00:17:09.761 Persistent Event Log Pages: Not Supported 00:17:09.761 Supported Log Pages Log Page: May Support 00:17:09.761 Commands Supported & Effects Log Page: Not Supported 00:17:09.761 Feature Identifiers & Effects Log Page:May Support 00:17:09.761 NVMe-MI Commands & Effects Log Page: May Support 00:17:09.761 Data Area 4 for Telemetry Log: Not Supported 00:17:09.761 Error Log Page Entries Supported: 128 00:17:09.761 Keep Alive: Supported 00:17:09.761 Keep Alive Granularity: 10000 ms 00:17:09.761 00:17:09.761 NVM Command Set Attributes 00:17:09.761 ========================== 00:17:09.761 Submission Queue Entry Size 00:17:09.761 Max: 64 00:17:09.761 Min: 64 00:17:09.761 Completion Queue Entry Size 00:17:09.761 Max: 16 00:17:09.761 Min: 16 00:17:09.761 Number of Namespaces: 32 00:17:09.761 Compare Command: Supported 00:17:09.761 Write Uncorrectable Command: Not Supported 00:17:09.761 Dataset Management Command: Supported 00:17:09.761 Write Zeroes Command: Supported 00:17:09.761 Set Features Save Field: Not Supported 00:17:09.761 Reservations: Not Supported 00:17:09.761 Timestamp: Not Supported 00:17:09.761 Copy: Supported 00:17:09.761 Volatile Write Cache: Present 00:17:09.761 Atomic Write Unit (Normal): 1 00:17:09.761 Atomic Write Unit (PFail): 1 00:17:09.761 Atomic Compare & Write Unit: 1 00:17:09.761 Fused Compare & Write: Supported 00:17:09.761 Scatter-Gather List 00:17:09.761 SGL Command Set: Supported (Dword aligned) 00:17:09.761 SGL Keyed: Not Supported 00:17:09.761 SGL Bit Bucket Descriptor: Not Supported 00:17:09.761 SGL Metadata Pointer: Not Supported 00:17:09.761 Oversized SGL: Not Supported 00:17:09.761 SGL Metadata Address: Not Supported 00:17:09.761 SGL Offset: Not Supported 00:17:09.761 Transport SGL Data Block: Not Supported 00:17:09.761 Replay Protected Memory Block: Not Supported 00:17:09.761 00:17:09.761 Firmware Slot Information 00:17:09.761 ========================= 00:17:09.761 Active slot: 1 00:17:09.761 Slot 1 Firmware Revision: 24.09 00:17:09.761 00:17:09.761 00:17:09.761 Commands Supported and Effects 00:17:09.761 ============================== 00:17:09.761 Admin Commands 00:17:09.761 -------------- 00:17:09.761 Get 
Log Page (02h): Supported 00:17:09.761 Identify (06h): Supported 00:17:09.761 Abort (08h): Supported 00:17:09.761 Set Features (09h): Supported 00:17:09.761 Get Features (0Ah): Supported 00:17:09.761 Asynchronous Event Request (0Ch): Supported 00:17:09.761 Keep Alive (18h): Supported 00:17:09.761 I/O Commands 00:17:09.761 ------------ 00:17:09.761 Flush (00h): Supported LBA-Change 00:17:09.761 Write (01h): Supported LBA-Change 00:17:09.761 Read (02h): Supported 00:17:09.761 Compare (05h): Supported 00:17:09.761 Write Zeroes (08h): Supported LBA-Change 00:17:09.761 Dataset Management (09h): Supported LBA-Change 00:17:09.761 Copy (19h): Supported LBA-Change 00:17:09.761 00:17:09.761 Error Log 00:17:09.761 ========= 00:17:09.761 00:17:09.761 Arbitration 00:17:09.761 =========== 00:17:09.761 Arbitration Burst: 1 00:17:09.761 00:17:09.761 Power Management 00:17:09.761 ================ 00:17:09.761 Number of Power States: 1 00:17:09.761 Current Power State: Power State #0 00:17:09.761 Power State #0: 00:17:09.761 Max Power: 0.00 W 00:17:09.761 Non-Operational State: Operational 00:17:09.761 Entry Latency: Not Reported 00:17:09.761 Exit Latency: Not Reported 00:17:09.761 Relative Read Throughput: 0 00:17:09.761 Relative Read Latency: 0 00:17:09.761 Relative Write Throughput: 0 00:17:09.761 Relative Write Latency: 0 00:17:09.761 Idle Power: Not Reported 00:17:09.761 Active Power: Not Reported 00:17:09.761 Non-Operational Permissive Mode: Not Supported 00:17:09.761 00:17:09.761 Health Information 00:17:09.761 ================== 00:17:09.761 Critical Warnings: 00:17:09.761 Available Spare Space: OK 00:17:09.761 Temperature: OK 00:17:09.761 Device Reliability: OK 00:17:09.761 Read Only: No 00:17:09.761 Volatile Memory Backup: OK 00:17:09.761 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:09.761 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:09.761 Available Spare: 0% 00:17:09.761 Available Sp[2024-07-25 07:23:17.119631] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:09.761 [2024-07-25 07:23:17.119643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:09.761 [2024-07-25 07:23:17.119668] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:09.761 [2024-07-25 07:23:17.119676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.761 [2024-07-25 07:23:17.119683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.761 [2024-07-25 07:23:17.119689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.761 [2024-07-25 07:23:17.119695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.761 [2024-07-25 07:23:17.119741] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:09.761 [2024-07-25 07:23:17.119751] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:09.761 [2024-07-25 07:23:17.120745] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:09.761 [2024-07-25 07:23:17.120785] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:09.761 [2024-07-25 07:23:17.120791] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:09.761 [2024-07-25 07:23:17.121756] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:09.761 [2024-07-25 07:23:17.121769] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:09.761 [2024-07-25 07:23:17.121835] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:10.023 [2024-07-25 07:23:17.126209] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:10.023 are Threshold: 0% 00:17:10.023 Life Percentage Used: 0% 00:17:10.023 Data Units Read: 0 00:17:10.023 Data Units Written: 0 00:17:10.023 Host Read Commands: 0 00:17:10.023 Host Write Commands: 0 00:17:10.023 Controller Busy Time: 0 minutes 00:17:10.023 Power Cycles: 0 00:17:10.023 Power On Hours: 0 hours 00:17:10.023 Unsafe Shutdowns: 0 00:17:10.023 Unrecoverable Media Errors: 0 00:17:10.023 Lifetime Error Log Entries: 0 00:17:10.023 Warning Temperature Time: 0 minutes 00:17:10.023 Critical Temperature Time: 0 minutes 00:17:10.023 00:17:10.023 Number of Queues 00:17:10.023 ================ 00:17:10.023 Number of I/O Submission Queues: 127 00:17:10.023 Number of I/O Completion Queues: 127 00:17:10.023 00:17:10.023 Active Namespaces 00:17:10.023 ================= 00:17:10.023 Namespace ID:1 00:17:10.023 Error Recovery Timeout: Unlimited 00:17:10.023 Command Set Identifier: NVM (00h) 00:17:10.023 Deallocate: Supported 00:17:10.023 Deallocated/Unwritten Error: Not Supported 00:17:10.023 Deallocated Read Value: Unknown 00:17:10.023 Deallocate in Write Zeroes: Not Supported 00:17:10.023 Deallocated Guard Field: 0xFFFF 00:17:10.023 Flush: Supported 00:17:10.023 Reservation: Supported 00:17:10.023 Namespace Sharing Capabilities: Multiple Controllers 00:17:10.023 Size (in LBAs): 131072 (0GiB) 00:17:10.023 Capacity (in LBAs): 131072 (0GiB) 00:17:10.023 Utilization (in LBAs): 131072 (0GiB) 00:17:10.023 NGUID: 3FF0B9070B844359891105DB75BCEA09 00:17:10.023 UUID: 3ff0b907-0b84-4359-8911-05db75bcea09 00:17:10.023 Thin Provisioning: Not Supported 00:17:10.023 Per-NS Atomic Units: Yes 00:17:10.023 Atomic Boundary Size (Normal): 0 00:17:10.023 Atomic Boundary Size (PFail): 0 00:17:10.023 Atomic Boundary Offset: 0 00:17:10.023 Maximum Single Source Range Length: 65535 00:17:10.023 Maximum Copy Length: 65535 00:17:10.023 Maximum Source Range Count: 1 00:17:10.023 NGUID/EUI64 Never Reused: No 00:17:10.023 Namespace Write Protected: No 00:17:10.023 Number of LBA Formats: 1 00:17:10.023 Current LBA Format: LBA Format #00 00:17:10.023 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:10.023 00:17:10.023 07:23:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:10.023 EAL: No free 2048 kB hugepages reported 
on node 1 00:17:10.023 [2024-07-25 07:23:17.311853] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:15.347 Initializing NVMe Controllers 00:17:15.347 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:15.347 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:15.347 Initialization complete. Launching workers. 00:17:15.347 ======================================================== 00:17:15.347 Latency(us) 00:17:15.347 Device Information : IOPS MiB/s Average min max 00:17:15.347 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40066.85 156.51 3194.54 848.03 6942.97 00:17:15.347 ======================================================== 00:17:15.347 Total : 40066.85 156.51 3194.54 848.03 6942.97 00:17:15.347 00:17:15.347 [2024-07-25 07:23:22.328369] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:15.347 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:15.347 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.347 [2024-07-25 07:23:22.513256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:20.635 Initializing NVMe Controllers 00:17:20.635 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:20.635 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:20.635 Initialization complete. Launching workers. 
00:17:20.635 ======================================================== 00:17:20.635 Latency(us) 00:17:20.635 Device Information : IOPS MiB/s Average min max 00:17:20.635 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16040.84 62.66 7979.13 6984.20 8059.71 00:17:20.635 ======================================================== 00:17:20.635 Total : 16040.84 62.66 7979.13 6984.20 8059.71 00:17:20.635 00:17:20.635 [2024-07-25 07:23:27.547338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:20.635 07:23:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:20.635 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.635 [2024-07-25 07:23:27.731204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:25.922 [2024-07-25 07:23:32.809440] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:25.922 Initializing NVMe Controllers 00:17:25.922 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:25.922 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:25.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:25.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:25.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:25.922 Initialization complete. Launching workers. 00:17:25.922 Starting thread on core 2 00:17:25.922 Starting thread on core 3 00:17:25.922 Starting thread on core 1 00:17:25.922 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:25.922 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.922 [2024-07-25 07:23:33.069602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:29.224 [2024-07-25 07:23:36.134111] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:29.224 Initializing NVMe Controllers 00:17:29.224 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:29.224 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:29.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:29.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:29.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:29.224 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:29.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:29.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:29.224 Initialization complete. Launching workers. 
00:17:29.224 Starting thread on core 1 with urgent priority queue 00:17:29.224 Starting thread on core 2 with urgent priority queue 00:17:29.224 Starting thread on core 3 with urgent priority queue 00:17:29.224 Starting thread on core 0 with urgent priority queue 00:17:29.224 SPDK bdev Controller (SPDK1 ) core 0: 13619.67 IO/s 7.34 secs/100000 ios 00:17:29.224 SPDK bdev Controller (SPDK1 ) core 1: 8092.00 IO/s 12.36 secs/100000 ios 00:17:29.224 SPDK bdev Controller (SPDK1 ) core 2: 12149.67 IO/s 8.23 secs/100000 ios 00:17:29.224 SPDK bdev Controller (SPDK1 ) core 3: 8128.67 IO/s 12.30 secs/100000 ios 00:17:29.224 ======================================================== 00:17:29.224 00:17:29.224 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:29.224 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.224 [2024-07-25 07:23:36.393745] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:29.224 Initializing NVMe Controllers 00:17:29.224 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:29.224 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:29.224 Namespace ID: 1 size: 0GB 00:17:29.224 Initialization complete. 00:17:29.224 INFO: using host memory buffer for IO 00:17:29.224 Hello world! 00:17:29.224 [2024-07-25 07:23:36.425939] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:29.224 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:29.224 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.485 [2024-07-25 07:23:36.690616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:30.427 Initializing NVMe Controllers 00:17:30.427 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:30.427 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:30.427 Initialization complete. Launching workers. 
00:17:30.427 submit (in ns) avg, min, max = 8960.8, 3940.8, 4001064.2 00:17:30.427 complete (in ns) avg, min, max = 18316.4, 2374.2, 5994473.3 00:17:30.427 00:17:30.427 Submit histogram 00:17:30.427 ================ 00:17:30.427 Range in us Cumulative Count 00:17:30.427 3.920 - 3.947: 0.1585% ( 30) 00:17:30.427 3.947 - 3.973: 2.8739% ( 514) 00:17:30.427 3.973 - 4.000: 10.6715% ( 1476) 00:17:30.427 4.000 - 4.027: 21.3429% ( 2020) 00:17:30.427 4.027 - 4.053: 32.7645% ( 2162) 00:17:30.427 4.053 - 4.080: 44.0488% ( 2136) 00:17:30.427 4.080 - 4.107: 58.3126% ( 2700) 00:17:30.427 4.107 - 4.133: 73.3636% ( 2849) 00:17:30.427 4.133 - 4.160: 87.2207% ( 2623) 00:17:30.427 4.160 - 4.187: 94.8492% ( 1444) 00:17:30.427 4.187 - 4.213: 97.8868% ( 575) 00:17:30.427 4.213 - 4.240: 98.9381% ( 199) 00:17:30.427 4.240 - 4.267: 99.2710% ( 63) 00:17:30.427 4.267 - 4.293: 99.3449% ( 14) 00:17:30.427 4.293 - 4.320: 99.3608% ( 3) 00:17:30.427 4.320 - 4.347: 99.3661% ( 1) 00:17:30.427 4.347 - 4.373: 99.3713% ( 1) 00:17:30.427 4.400 - 4.427: 99.3766% ( 1) 00:17:30.427 4.427 - 4.453: 99.3819% ( 1) 00:17:30.427 4.613 - 4.640: 99.3872% ( 1) 00:17:30.427 4.827 - 4.853: 99.3925% ( 1) 00:17:30.427 4.880 - 4.907: 99.3977% ( 1) 00:17:30.427 4.933 - 4.960: 99.4030% ( 1) 00:17:30.427 4.960 - 4.987: 99.4083% ( 1) 00:17:30.427 5.013 - 5.040: 99.4136% ( 1) 00:17:30.427 5.067 - 5.093: 99.4189% ( 1) 00:17:30.427 5.173 - 5.200: 99.4242% ( 1) 00:17:30.427 5.200 - 5.227: 99.4294% ( 1) 00:17:30.427 5.387 - 5.413: 99.4347% ( 1) 00:17:30.427 5.413 - 5.440: 99.4400% ( 1) 00:17:30.427 5.707 - 5.733: 99.4453% ( 1) 00:17:30.427 6.027 - 6.053: 99.4559% ( 2) 00:17:30.427 6.107 - 6.133: 99.4611% ( 1) 00:17:30.427 6.160 - 6.187: 99.4664% ( 1) 00:17:30.427 6.213 - 6.240: 99.4717% ( 1) 00:17:30.427 6.267 - 6.293: 99.4823% ( 2) 00:17:30.427 6.347 - 6.373: 99.4876% ( 1) 00:17:30.427 7.093 - 7.147: 99.4928% ( 1) 00:17:30.427 7.147 - 7.200: 99.4981% ( 1) 00:17:30.427 7.200 - 7.253: 99.5034% ( 1) 00:17:30.427 7.360 - 7.413: 99.5140% ( 2) 00:17:30.427 7.413 - 7.467: 99.5193% ( 1) 00:17:30.427 7.467 - 7.520: 99.5404% ( 4) 00:17:30.427 7.520 - 7.573: 99.5457% ( 1) 00:17:30.427 7.573 - 7.627: 99.5510% ( 1) 00:17:30.427 7.627 - 7.680: 99.5615% ( 2) 00:17:30.427 7.680 - 7.733: 99.5668% ( 1) 00:17:30.427 7.733 - 7.787: 99.5774% ( 2) 00:17:30.427 7.787 - 7.840: 99.5879% ( 2) 00:17:30.427 7.840 - 7.893: 99.6038% ( 3) 00:17:30.427 7.893 - 7.947: 99.6091% ( 1) 00:17:30.427 7.947 - 8.000: 99.6196% ( 2) 00:17:30.427 8.000 - 8.053: 99.6460% ( 5) 00:17:30.427 8.053 - 8.107: 99.6619% ( 3) 00:17:30.427 8.107 - 8.160: 99.6777% ( 3) 00:17:30.427 8.160 - 8.213: 99.6883% ( 2) 00:17:30.427 8.213 - 8.267: 99.6936% ( 1) 00:17:30.427 8.267 - 8.320: 99.6989% ( 1) 00:17:30.427 8.320 - 8.373: 99.7094% ( 2) 00:17:30.427 8.373 - 8.427: 99.7359% ( 5) 00:17:30.427 8.427 - 8.480: 99.7411% ( 1) 00:17:30.427 8.480 - 8.533: 99.7570% ( 3) 00:17:30.427 8.533 - 8.587: 99.7728% ( 3) 00:17:30.427 8.587 - 8.640: 99.7781% ( 1) 00:17:30.427 8.640 - 8.693: 99.7887% ( 2) 00:17:30.427 8.693 - 8.747: 99.7940% ( 1) 00:17:30.427 8.907 - 8.960: 99.8045% ( 2) 00:17:30.427 8.960 - 9.013: 99.8098% ( 1) 00:17:30.427 9.013 - 9.067: 99.8204% ( 2) 00:17:30.427 9.173 - 9.227: 99.8257% ( 1) 00:17:30.427 9.547 - 9.600: 99.8309% ( 1) 00:17:30.428 9.707 - 9.760: 99.8362% ( 1) 00:17:30.428 9.973 - 10.027: 99.8415% ( 1) 00:17:30.428 11.040 - 11.093: 99.8468% ( 1) 00:17:30.428 12.533 - 12.587: 99.8521% ( 1) 00:17:30.428 13.973 - 14.080: 99.8574% ( 1) 00:17:30.428 14.507 - 14.613: 99.8626% ( 1) 00:17:30.428 15.467 - 
15.573: 99.8679% ( 1) 00:17:30.428 16.427 - 16.533: 99.8732% ( 1) 00:17:30.428 [2024-07-25 07:23:37.710131] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:30.428 17.493 - 17.600: 99.8785% ( 1) 00:17:30.428 3986.773 - 4014.080: 100.0000% ( 23) 00:17:30.428 00:17:30.428 Complete histogram 00:17:30.428 ================== 00:17:30.428 Range in us Cumulative Count 00:17:30.428 2.373 - 2.387: 0.0053% ( 1) 00:17:30.428 2.387 - 2.400: 0.6709% ( 126) 00:17:30.428 2.400 - 2.413: 0.9668% ( 56) 00:17:30.428 2.413 - 2.427: 1.1041% ( 26) 00:17:30.428 2.427 - 2.440: 1.4686% ( 69) 00:17:30.428 2.440 - 2.453: 48.4231% ( 8888) 00:17:30.428 2.453 - 2.467: 55.0848% ( 1261) 00:17:30.428 2.467 - 2.480: 74.7953% ( 3731) 00:17:30.428 2.480 - 2.493: 79.9197% ( 970) 00:17:30.428 2.493 - 2.507: 81.9219% ( 379) 00:17:30.428 2.507 - 2.520: 86.3014% ( 829) 00:17:30.428 2.520 - 2.533: 92.0651% ( 1091) 00:17:30.428 2.533 - 2.547: 95.6944% ( 687) 00:17:30.428 2.547 - 2.560: 97.8816% ( 414) 00:17:30.428 2.560 - 2.573: 98.9804% ( 208) 00:17:30.428 2.573 - 2.587: 99.2921% ( 59) 00:17:30.428 2.587 - 2.600: 99.3449% ( 10) 00:17:30.428 2.600 - 2.613: 99.3713% ( 5) 00:17:30.428 2.613 - 2.627: 99.3766% ( 1) 00:17:30.428 2.627 - 2.640: 99.3819% ( 1) 00:17:30.428 3.000 - 3.013: 99.3872% ( 1) 00:17:30.428 5.280 - 5.307: 99.3977% ( 2) 00:17:30.428 5.653 - 5.680: 99.4030% ( 1) 00:17:30.428 5.760 - 5.787: 99.4083% ( 1) 00:17:30.428 5.813 - 5.840: 99.4189% ( 2) 00:17:30.428 5.840 - 5.867: 99.4242% ( 1) 00:17:30.428 6.080 - 6.107: 99.4294% ( 1) 00:17:30.428 6.160 - 6.187: 99.4347% ( 1) 00:17:30.428 6.187 - 6.213: 99.4400% ( 1) 00:17:30.428 6.240 - 6.267: 99.4453% ( 1) 00:17:30.428 6.267 - 6.293: 99.4506% ( 1) 00:17:30.428 6.293 - 6.320: 99.4559% ( 1) 00:17:30.428 6.400 - 6.427: 99.4611% ( 1) 00:17:30.428 6.427 - 6.453: 99.4664% ( 1) 00:17:30.428 6.453 - 6.480: 99.4770% ( 2) 00:17:30.428 6.507 - 6.533: 99.4823% ( 1) 00:17:30.428 6.560 - 6.587: 99.4876% ( 1) 00:17:30.428 6.640 - 6.667: 99.4928% ( 1) 00:17:30.428 6.667 - 6.693: 99.4981% ( 1) 00:17:30.428 6.827 - 6.880: 99.5034% ( 1) 00:17:30.428 6.880 - 6.933: 99.5298% ( 5) 00:17:30.428 6.933 - 6.987: 99.5351% ( 1) 00:17:30.428 6.987 - 7.040: 99.5404% ( 1) 00:17:30.428 7.040 - 7.093: 99.5457% ( 1) 00:17:30.428 7.147 - 7.200: 99.5562% ( 2) 00:17:30.428 8.053 - 8.107: 99.5615% ( 1) 00:17:30.428 8.427 - 8.480: 99.5668% ( 1) 00:17:30.428 9.120 - 9.173: 99.5721% ( 1) 00:17:30.428 9.280 - 9.333: 99.5774% ( 1) 00:17:30.428 11.413 - 11.467: 99.5827% ( 1) 00:17:30.428 12.107 - 12.160: 99.5879% ( 1) 00:17:30.428 15.573 - 15.680: 99.5932% ( 1) 00:17:30.428 21.547 - 21.653: 99.5985% ( 1) 00:17:30.428 34.133 - 34.347: 99.6038% ( 1) 00:17:30.428 45.867 - 46.080: 99.6091% ( 1) 00:17:30.428 3986.773 - 4014.080: 99.9894% ( 72) 00:17:30.428 5980.160 - 6007.467: 100.0000% ( 2) 00:17:30.428 00:17:30.428 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:30.428 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:30.428 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:30.428 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:30.428 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:30.689 [ 00:17:30.689 { 00:17:30.689 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:30.689 "subtype": "Discovery", 00:17:30.689 "listen_addresses": [], 00:17:30.689 "allow_any_host": true, 00:17:30.689 "hosts": [] 00:17:30.689 }, 00:17:30.689 { 00:17:30.689 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:30.689 "subtype": "NVMe", 00:17:30.689 "listen_addresses": [ 00:17:30.689 { 00:17:30.689 "trtype": "VFIOUSER", 00:17:30.689 "adrfam": "IPv4", 00:17:30.689 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:30.689 "trsvcid": "0" 00:17:30.689 } 00:17:30.689 ], 00:17:30.689 "allow_any_host": true, 00:17:30.689 "hosts": [], 00:17:30.689 "serial_number": "SPDK1", 00:17:30.689 "model_number": "SPDK bdev Controller", 00:17:30.689 "max_namespaces": 32, 00:17:30.689 "min_cntlid": 1, 00:17:30.689 "max_cntlid": 65519, 00:17:30.689 "namespaces": [ 00:17:30.689 { 00:17:30.689 "nsid": 1, 00:17:30.689 "bdev_name": "Malloc1", 00:17:30.689 "name": "Malloc1", 00:17:30.689 "nguid": "3FF0B9070B844359891105DB75BCEA09", 00:17:30.689 "uuid": "3ff0b907-0b84-4359-8911-05db75bcea09" 00:17:30.689 } 00:17:30.689 ] 00:17:30.689 }, 00:17:30.689 { 00:17:30.689 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:30.689 "subtype": "NVMe", 00:17:30.689 "listen_addresses": [ 00:17:30.689 { 00:17:30.689 "trtype": "VFIOUSER", 00:17:30.689 "adrfam": "IPv4", 00:17:30.689 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:30.689 "trsvcid": "0" 00:17:30.689 } 00:17:30.689 ], 00:17:30.689 "allow_any_host": true, 00:17:30.689 "hosts": [], 00:17:30.689 "serial_number": "SPDK2", 00:17:30.689 "model_number": "SPDK bdev Controller", 00:17:30.689 "max_namespaces": 32, 00:17:30.689 "min_cntlid": 1, 00:17:30.689 "max_cntlid": 65519, 00:17:30.689 "namespaces": [ 00:17:30.689 { 00:17:30.689 "nsid": 1, 00:17:30.689 "bdev_name": "Malloc2", 00:17:30.689 "name": "Malloc2", 00:17:30.689 "nguid": "42A19C7388F747D7991AEAD867968299", 00:17:30.689 "uuid": "42a19c73-88f7-47d7-991a-ead867968299" 00:17:30.689 } 00:17:30.689 ] 00:17:30.689 } 00:17:30.689 ] 00:17:30.689 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:30.689 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:30.689 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=72576 00:17:30.689 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:30.689 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:30.689 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:30.689 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:30.689 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:30.689 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:30.689 07:23:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:30.689 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.950 Malloc3 00:17:30.950 [2024-07-25 07:23:38.096649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:30.950 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:30.950 [2024-07-25 07:23:38.266856] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:30.950 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:30.950 Asynchronous Event Request test 00:17:30.950 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:30.950 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:30.950 Registering asynchronous event callbacks... 00:17:30.950 Starting namespace attribute notice tests for all controllers... 00:17:30.950 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:30.950 aer_cb - Changed Namespace 00:17:30.950 Cleaning up... 00:17:31.211 [ 00:17:31.211 { 00:17:31.211 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:31.211 "subtype": "Discovery", 00:17:31.211 "listen_addresses": [], 00:17:31.211 "allow_any_host": true, 00:17:31.211 "hosts": [] 00:17:31.211 }, 00:17:31.211 { 00:17:31.211 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:31.211 "subtype": "NVMe", 00:17:31.211 "listen_addresses": [ 00:17:31.211 { 00:17:31.211 "trtype": "VFIOUSER", 00:17:31.211 "adrfam": "IPv4", 00:17:31.211 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:31.211 "trsvcid": "0" 00:17:31.211 } 00:17:31.212 ], 00:17:31.212 "allow_any_host": true, 00:17:31.212 "hosts": [], 00:17:31.212 "serial_number": "SPDK1", 00:17:31.212 "model_number": "SPDK bdev Controller", 00:17:31.212 "max_namespaces": 32, 00:17:31.212 "min_cntlid": 1, 00:17:31.212 "max_cntlid": 65519, 00:17:31.212 "namespaces": [ 00:17:31.212 { 00:17:31.212 "nsid": 1, 00:17:31.212 "bdev_name": "Malloc1", 00:17:31.212 "name": "Malloc1", 00:17:31.212 "nguid": "3FF0B9070B844359891105DB75BCEA09", 00:17:31.212 "uuid": "3ff0b907-0b84-4359-8911-05db75bcea09" 00:17:31.212 }, 00:17:31.212 { 00:17:31.212 "nsid": 2, 00:17:31.212 "bdev_name": "Malloc3", 00:17:31.212 "name": "Malloc3", 00:17:31.212 "nguid": "F6FC35D16F674477AE5EA77C0030126C", 00:17:31.212 "uuid": "f6fc35d1-6f67-4477-ae5e-a77c0030126c" 00:17:31.212 } 00:17:31.212 ] 00:17:31.212 }, 00:17:31.212 { 00:17:31.212 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:31.212 "subtype": "NVMe", 00:17:31.212 "listen_addresses": [ 00:17:31.212 { 00:17:31.212 "trtype": "VFIOUSER", 00:17:31.212 "adrfam": "IPv4", 00:17:31.212 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:31.212 "trsvcid": "0" 00:17:31.212 } 00:17:31.212 ], 00:17:31.212 "allow_any_host": true, 00:17:31.212 "hosts": [], 00:17:31.212 
"serial_number": "SPDK2", 00:17:31.212 "model_number": "SPDK bdev Controller", 00:17:31.212 "max_namespaces": 32, 00:17:31.212 "min_cntlid": 1, 00:17:31.212 "max_cntlid": 65519, 00:17:31.212 "namespaces": [ 00:17:31.212 { 00:17:31.212 "nsid": 1, 00:17:31.212 "bdev_name": "Malloc2", 00:17:31.212 "name": "Malloc2", 00:17:31.212 "nguid": "42A19C7388F747D7991AEAD867968299", 00:17:31.212 "uuid": "42a19c73-88f7-47d7-991a-ead867968299" 00:17:31.212 } 00:17:31.212 ] 00:17:31.212 } 00:17:31.212 ] 00:17:31.212 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 72576 00:17:31.212 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:31.212 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:31.212 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:31.212 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:31.212 [2024-07-25 07:23:38.475151] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:17:31.212 [2024-07-25 07:23:38.475193] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72585 ] 00:17:31.212 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.212 [2024-07-25 07:23:38.507758] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:31.212 [2024-07-25 07:23:38.516425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:31.212 [2024-07-25 07:23:38.516447] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f02cb461000 00:17:31.212 [2024-07-25 07:23:38.517426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:31.212 [2024-07-25 07:23:38.518435] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:31.212 [2024-07-25 07:23:38.519443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:31.212 [2024-07-25 07:23:38.520453] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:31.212 [2024-07-25 07:23:38.521455] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:31.212 [2024-07-25 07:23:38.522469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:31.212 [2024-07-25 07:23:38.523475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:31.212 [2024-07-25 07:23:38.524477] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:31.212 [2024-07-25 07:23:38.525492] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:31.212 [2024-07-25 07:23:38.525501] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f02cb456000 00:17:31.212 [2024-07-25 07:23:38.526827] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:31.212 [2024-07-25 07:23:38.543042] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:31.212 [2024-07-25 07:23:38.543064] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:31.212 [2024-07-25 07:23:38.545110] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:31.212 [2024-07-25 07:23:38.545158] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:31.212 [2024-07-25 07:23:38.545243] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:31.212 [2024-07-25 07:23:38.545255] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:31.212 [2024-07-25 07:23:38.545260] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:31.212 [2024-07-25 07:23:38.546119] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:31.212 [2024-07-25 07:23:38.546132] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:31.212 [2024-07-25 07:23:38.546139] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:31.212 [2024-07-25 07:23:38.547123] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:31.212 [2024-07-25 07:23:38.547133] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:31.212 [2024-07-25 07:23:38.547141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:31.212 [2024-07-25 07:23:38.550207] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:31.212 [2024-07-25 07:23:38.550217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:31.212 [2024-07-25 07:23:38.551153] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:31.212 [2024-07-25 07:23:38.551162] 
nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:31.212 [2024-07-25 07:23:38.551168] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:31.212 [2024-07-25 07:23:38.551176] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:31.212 [2024-07-25 07:23:38.551283] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:31.212 [2024-07-25 07:23:38.551288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:31.212 [2024-07-25 07:23:38.551293] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:31.212 [2024-07-25 07:23:38.552154] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:31.212 [2024-07-25 07:23:38.553158] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:31.212 [2024-07-25 07:23:38.554167] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:31.212 [2024-07-25 07:23:38.555166] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:31.212 [2024-07-25 07:23:38.555210] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:31.212 [2024-07-25 07:23:38.556176] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:31.212 [2024-07-25 07:23:38.556185] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:31.212 [2024-07-25 07:23:38.556190] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:31.212 [2024-07-25 07:23:38.556215] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:31.212 [2024-07-25 07:23:38.556226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:31.212 [2024-07-25 07:23:38.556239] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:31.212 [2024-07-25 07:23:38.556245] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:31.212 [2024-07-25 07:23:38.556249] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.212 [2024-07-25 07:23:38.556261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:31.212 [2024-07-25 07:23:38.561210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:31.212 [2024-07-25 07:23:38.561221] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:31.212 [2024-07-25 07:23:38.561226] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:31.213 [2024-07-25 07:23:38.561231] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:31.213 [2024-07-25 07:23:38.561235] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:31.213 [2024-07-25 07:23:38.561240] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:31.213 [2024-07-25 07:23:38.561244] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:31.213 [2024-07-25 07:23:38.561249] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:31.213 [2024-07-25 07:23:38.561256] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:31.213 [2024-07-25 07:23:38.561268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:31.213 [2024-07-25 07:23:38.569208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:31.213 [2024-07-25 07:23:38.569224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.213 [2024-07-25 07:23:38.569233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.213 [2024-07-25 07:23:38.569241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.213 [2024-07-25 07:23:38.569250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:31.213 [2024-07-25 07:23:38.569254] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:31.213 [2024-07-25 07:23:38.569263] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:31.213 [2024-07-25 07:23:38.569272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:31.213 [2024-07-25 07:23:38.577209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:31.213 [2024-07-25 07:23:38.577217] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:31.213 [2024-07-25 07:23:38.577222] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:31.213 [2024-07-25 07:23:38.577231] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:31.213 [2024-07-25 07:23:38.577236] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:31.213 [2024-07-25 07:23:38.577245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:31.475 [2024-07-25 07:23:38.585216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:31.475 [2024-07-25 07:23:38.585282] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:31.475 [2024-07-25 07:23:38.585290] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:31.475 [2024-07-25 07:23:38.585298] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:31.475 [2024-07-25 07:23:38.585303] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:31.475 [2024-07-25 07:23:38.585306] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.475 [2024-07-25 07:23:38.585312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:31.475 [2024-07-25 07:23:38.593211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:31.475 [2024-07-25 07:23:38.593222] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:31.475 [2024-07-25 07:23:38.593230] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:31.475 [2024-07-25 07:23:38.593238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:31.475 [2024-07-25 07:23:38.593245] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:31.475 [2024-07-25 07:23:38.593249] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:31.475 [2024-07-25 07:23:38.593253] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.475 [2024-07-25 07:23:38.593259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:31.475 [2024-07-25 07:23:38.601207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:31.475 [2024-07-25 07:23:38.601220] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:31.475 [2024-07-25 07:23:38.601227] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:31.475 [2024-07-25 07:23:38.601235] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:31.475 [2024-07-25 07:23:38.601239] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:31.475 [2024-07-25 07:23:38.601242] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.475 [2024-07-25 07:23:38.601249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:31.475 [2024-07-25 07:23:38.609207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:31.475 [2024-07-25 07:23:38.609217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:31.476 [2024-07-25 07:23:38.609224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:31.476 [2024-07-25 07:23:38.609233] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:31.476 [2024-07-25 07:23:38.609241] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:31.476 [2024-07-25 07:23:38.609249] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:31.476 [2024-07-25 07:23:38.609254] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:31.476 [2024-07-25 07:23:38.609259] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:31.476 [2024-07-25 07:23:38.609263] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:31.476 [2024-07-25 07:23:38.609268] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:31.476 [2024-07-25 07:23:38.609284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:31.476 [2024-07-25 07:23:38.617209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:31.476 [2024-07-25 07:23:38.617222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:31.476 [2024-07-25 07:23:38.625207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:31.476 [2024-07-25 07:23:38.625221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:31.476 [2024-07-25 07:23:38.633208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:31.476 [2024-07-25 07:23:38.633221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:31.476 [2024-07-25 07:23:38.641207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:31.476 [2024-07-25 07:23:38.641223] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:31.476 [2024-07-25 07:23:38.641227] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:31.476 [2024-07-25 07:23:38.641231] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:31.476 [2024-07-25 07:23:38.641235] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:31.476 [2024-07-25 07:23:38.641238] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:31.476 [2024-07-25 07:23:38.641245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:31.476 [2024-07-25 07:23:38.641252] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:31.476 [2024-07-25 07:23:38.641256] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:31.476 [2024-07-25 07:23:38.641260] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.476 [2024-07-25 07:23:38.641266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:31.476 [2024-07-25 07:23:38.641273] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:31.476 [2024-07-25 07:23:38.641277] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:31.476 [2024-07-25 07:23:38.641281] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.476 [2024-07-25 07:23:38.641286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:31.476 [2024-07-25 07:23:38.641296] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:31.476 [2024-07-25 07:23:38.641301] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:31.476 [2024-07-25 07:23:38.641304] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:31.476 [2024-07-25 07:23:38.641310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:31.476 [2024-07-25 07:23:38.649206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:31.476 [2024-07-25 07:23:38.649220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:31.476 [2024-07-25 07:23:38.649231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:31.476 [2024-07-25 07:23:38.649238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:31.476 ===================================================== 00:17:31.476 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:31.476 ===================================================== 00:17:31.476 Controller Capabilities/Features 00:17:31.476 ================================ 00:17:31.476 Vendor ID: 4e58 00:17:31.476 Subsystem Vendor ID: 4e58 00:17:31.476 Serial Number: SPDK2 00:17:31.476 Model Number: SPDK bdev Controller 00:17:31.476 Firmware Version: 24.09 00:17:31.476 Recommended Arb Burst: 6 00:17:31.476 IEEE OUI Identifier: 8d 6b 50 00:17:31.476 Multi-path I/O 00:17:31.476 May have multiple subsystem ports: Yes 00:17:31.476 May have multiple controllers: Yes 00:17:31.476 Associated with SR-IOV VF: No 00:17:31.476 Max Data Transfer Size: 131072 00:17:31.476 Max Number of Namespaces: 32 00:17:31.476 Max Number of I/O Queues: 127 00:17:31.476 NVMe Specification Version (VS): 1.3 00:17:31.476 NVMe Specification Version (Identify): 1.3 00:17:31.476 Maximum Queue Entries: 256 00:17:31.476 Contiguous Queues Required: Yes 00:17:31.476 Arbitration Mechanisms Supported 00:17:31.476 Weighted Round Robin: Not Supported 00:17:31.476 Vendor Specific: Not Supported 00:17:31.476 Reset Timeout: 15000 ms 00:17:31.476 Doorbell Stride: 4 bytes 00:17:31.476 NVM Subsystem Reset: Not Supported 00:17:31.476 Command Sets Supported 00:17:31.476 NVM Command Set: Supported 00:17:31.476 Boot Partition: Not Supported 00:17:31.476 Memory Page Size Minimum: 4096 bytes 00:17:31.476 Memory Page Size Maximum: 4096 bytes 00:17:31.476 Persistent Memory Region: Not Supported 00:17:31.476 Optional Asynchronous Events Supported 00:17:31.476 Namespace Attribute Notices: Supported 00:17:31.476 Firmware Activation Notices: Not Supported 00:17:31.476 ANA Change Notices: Not Supported 00:17:31.476 PLE Aggregate Log Change Notices: Not Supported 00:17:31.476 LBA Status Info Alert Notices: Not Supported 00:17:31.476 EGE Aggregate Log Change Notices: Not Supported 00:17:31.476 Normal NVM Subsystem Shutdown event: Not Supported 00:17:31.476 Zone Descriptor Change Notices: Not Supported 00:17:31.476 Discovery Log Change Notices: Not Supported 00:17:31.476 Controller Attributes 00:17:31.476 128-bit Host Identifier: Supported 00:17:31.476 Non-Operational Permissive Mode: Not Supported 00:17:31.476 NVM Sets: Not Supported 00:17:31.476 Read Recovery Levels: Not Supported 00:17:31.476 Endurance Groups: Not Supported 00:17:31.476 Predictable Latency Mode: Not Supported 00:17:31.476 Traffic Based Keep ALive: Not Supported 00:17:31.476 Namespace Granularity: Not Supported 00:17:31.476 SQ Associations: Not Supported 00:17:31.476 UUID List: Not Supported 00:17:31.476 Multi-Domain Subsystem: Not Supported 00:17:31.476 Fixed Capacity Management: Not Supported 00:17:31.476 Variable Capacity Management: Not Supported 00:17:31.476 Delete Endurance Group: Not Supported 00:17:31.476 Delete NVM Set: Not Supported 00:17:31.476 Extended LBA Formats Supported: Not Supported 00:17:31.476 Flexible Data Placement Supported: Not Supported 00:17:31.476 00:17:31.476 Controller Memory Buffer Support 00:17:31.476 ================================ 00:17:31.476 Supported: No 00:17:31.476 00:17:31.476 Persistent Memory Region Support 00:17:31.476 
================================ 00:17:31.476 Supported: No 00:17:31.476 00:17:31.476 Admin Command Set Attributes 00:17:31.476 ============================ 00:17:31.476 Security Send/Receive: Not Supported 00:17:31.476 Format NVM: Not Supported 00:17:31.476 Firmware Activate/Download: Not Supported 00:17:31.476 Namespace Management: Not Supported 00:17:31.476 Device Self-Test: Not Supported 00:17:31.476 Directives: Not Supported 00:17:31.476 NVMe-MI: Not Supported 00:17:31.476 Virtualization Management: Not Supported 00:17:31.476 Doorbell Buffer Config: Not Supported 00:17:31.476 Get LBA Status Capability: Not Supported 00:17:31.476 Command & Feature Lockdown Capability: Not Supported 00:17:31.476 Abort Command Limit: 4 00:17:31.476 Async Event Request Limit: 4 00:17:31.476 Number of Firmware Slots: N/A 00:17:31.476 Firmware Slot 1 Read-Only: N/A 00:17:31.476 Firmware Activation Without Reset: N/A 00:17:31.476 Multiple Update Detection Support: N/A 00:17:31.476 Firmware Update Granularity: No Information Provided 00:17:31.476 Per-Namespace SMART Log: No 00:17:31.476 Asymmetric Namespace Access Log Page: Not Supported 00:17:31.476 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:31.476 Command Effects Log Page: Supported 00:17:31.476 Get Log Page Extended Data: Supported 00:17:31.476 Telemetry Log Pages: Not Supported 00:17:31.476 Persistent Event Log Pages: Not Supported 00:17:31.476 Supported Log Pages Log Page: May Support 00:17:31.476 Commands Supported & Effects Log Page: Not Supported 00:17:31.476 Feature Identifiers & Effects Log Page:May Support 00:17:31.477 NVMe-MI Commands & Effects Log Page: May Support 00:17:31.477 Data Area 4 for Telemetry Log: Not Supported 00:17:31.477 Error Log Page Entries Supported: 128 00:17:31.477 Keep Alive: Supported 00:17:31.477 Keep Alive Granularity: 10000 ms 00:17:31.477 00:17:31.477 NVM Command Set Attributes 00:17:31.477 ========================== 00:17:31.477 Submission Queue Entry Size 00:17:31.477 Max: 64 00:17:31.477 Min: 64 00:17:31.477 Completion Queue Entry Size 00:17:31.477 Max: 16 00:17:31.477 Min: 16 00:17:31.477 Number of Namespaces: 32 00:17:31.477 Compare Command: Supported 00:17:31.477 Write Uncorrectable Command: Not Supported 00:17:31.477 Dataset Management Command: Supported 00:17:31.477 Write Zeroes Command: Supported 00:17:31.477 Set Features Save Field: Not Supported 00:17:31.477 Reservations: Not Supported 00:17:31.477 Timestamp: Not Supported 00:17:31.477 Copy: Supported 00:17:31.477 Volatile Write Cache: Present 00:17:31.477 Atomic Write Unit (Normal): 1 00:17:31.477 Atomic Write Unit (PFail): 1 00:17:31.477 Atomic Compare & Write Unit: 1 00:17:31.477 Fused Compare & Write: Supported 00:17:31.477 Scatter-Gather List 00:17:31.477 SGL Command Set: Supported (Dword aligned) 00:17:31.477 SGL Keyed: Not Supported 00:17:31.477 SGL Bit Bucket Descriptor: Not Supported 00:17:31.477 SGL Metadata Pointer: Not Supported 00:17:31.477 Oversized SGL: Not Supported 00:17:31.477 SGL Metadata Address: Not Supported 00:17:31.477 SGL Offset: Not Supported 00:17:31.477 Transport SGL Data Block: Not Supported 00:17:31.477 Replay Protected Memory Block: Not Supported 00:17:31.477 00:17:31.477 Firmware Slot Information 00:17:31.477 ========================= 00:17:31.477 Active slot: 1 00:17:31.477 Slot 1 Firmware Revision: 24.09 00:17:31.477 00:17:31.477 00:17:31.477 Commands Supported and Effects 00:17:31.477 ============================== 00:17:31.477 Admin Commands 00:17:31.477 -------------- 00:17:31.477 Get Log Page (02h): Supported 
00:17:31.477 Identify (06h): Supported 00:17:31.477 Abort (08h): Supported 00:17:31.477 Set Features (09h): Supported 00:17:31.477 Get Features (0Ah): Supported 00:17:31.477 Asynchronous Event Request (0Ch): Supported 00:17:31.477 Keep Alive (18h): Supported 00:17:31.477 I/O Commands 00:17:31.477 ------------ 00:17:31.477 Flush (00h): Supported LBA-Change 00:17:31.477 Write (01h): Supported LBA-Change 00:17:31.477 Read (02h): Supported 00:17:31.477 Compare (05h): Supported 00:17:31.477 Write Zeroes (08h): Supported LBA-Change 00:17:31.477 Dataset Management (09h): Supported LBA-Change 00:17:31.477 Copy (19h): Supported LBA-Change 00:17:31.477 00:17:31.477 Error Log 00:17:31.477 ========= 00:17:31.477 00:17:31.477 Arbitration 00:17:31.477 =========== 00:17:31.477 Arbitration Burst: 1 00:17:31.477 00:17:31.477 Power Management 00:17:31.477 ================ 00:17:31.477 Number of Power States: 1 00:17:31.477 Current Power State: Power State #0 00:17:31.477 Power State #0: 00:17:31.477 Max Power: 0.00 W 00:17:31.477 Non-Operational State: Operational 00:17:31.477 Entry Latency: Not Reported 00:17:31.477 Exit Latency: Not Reported 00:17:31.477 Relative Read Throughput: 0 00:17:31.477 Relative Read Latency: 0 00:17:31.477 Relative Write Throughput: 0 00:17:31.477 Relative Write Latency: 0 00:17:31.477 Idle Power: Not Reported 00:17:31.477 Active Power: Not Reported 00:17:31.477 Non-Operational Permissive Mode: Not Supported 00:17:31.477 00:17:31.477 Health Information 00:17:31.477 ================== 00:17:31.477 Critical Warnings: 00:17:31.477 Available Spare Space: OK 00:17:31.477 Temperature: OK 00:17:31.477 Device Reliability: OK 00:17:31.477 Read Only: No 00:17:31.477 Volatile Memory Backup: OK 00:17:31.477 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:31.477 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:31.477 Available Spare: 0% 00:17:31.477 Available Sp[2024-07-25 07:23:38.649338] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:31.477 [2024-07-25 07:23:38.657208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:31.477 [2024-07-25 07:23:38.657237] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:31.477 [2024-07-25 07:23:38.657246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.477 [2024-07-25 07:23:38.657252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.477 [2024-07-25 07:23:38.657259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.477 [2024-07-25 07:23:38.657265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:31.477 [2024-07-25 07:23:38.657314] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:31.477 [2024-07-25 07:23:38.657324] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:31.477 [2024-07-25 07:23:38.658319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:17:31.477 [2024-07-25 07:23:38.658368] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:31.477 [2024-07-25 07:23:38.658375] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:31.477 [2024-07-25 07:23:38.659322] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:31.477 [2024-07-25 07:23:38.659333] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:31.477 [2024-07-25 07:23:38.659382] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:31.477 [2024-07-25 07:23:38.662209] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:31.477 are Threshold: 0% 00:17:31.477 Life Percentage Used: 0% 00:17:31.477 Data Units Read: 0 00:17:31.477 Data Units Written: 0 00:17:31.477 Host Read Commands: 0 00:17:31.477 Host Write Commands: 0 00:17:31.477 Controller Busy Time: 0 minutes 00:17:31.477 Power Cycles: 0 00:17:31.477 Power On Hours: 0 hours 00:17:31.477 Unsafe Shutdowns: 0 00:17:31.477 Unrecoverable Media Errors: 0 00:17:31.477 Lifetime Error Log Entries: 0 00:17:31.477 Warning Temperature Time: 0 minutes 00:17:31.477 Critical Temperature Time: 0 minutes 00:17:31.477 00:17:31.477 Number of Queues 00:17:31.477 ================ 00:17:31.477 Number of I/O Submission Queues: 127 00:17:31.477 Number of I/O Completion Queues: 127 00:17:31.477 00:17:31.477 Active Namespaces 00:17:31.477 ================= 00:17:31.477 Namespace ID:1 00:17:31.477 Error Recovery Timeout: Unlimited 00:17:31.477 Command Set Identifier: NVM (00h) 00:17:31.477 Deallocate: Supported 00:17:31.477 Deallocated/Unwritten Error: Not Supported 00:17:31.477 Deallocated Read Value: Unknown 00:17:31.477 Deallocate in Write Zeroes: Not Supported 00:17:31.477 Deallocated Guard Field: 0xFFFF 00:17:31.477 Flush: Supported 00:17:31.477 Reservation: Supported 00:17:31.477 Namespace Sharing Capabilities: Multiple Controllers 00:17:31.477 Size (in LBAs): 131072 (0GiB) 00:17:31.477 Capacity (in LBAs): 131072 (0GiB) 00:17:31.477 Utilization (in LBAs): 131072 (0GiB) 00:17:31.477 NGUID: 42A19C7388F747D7991AEAD867968299 00:17:31.477 UUID: 42a19c73-88f7-47d7-991a-ead867968299 00:17:31.477 Thin Provisioning: Not Supported 00:17:31.477 Per-NS Atomic Units: Yes 00:17:31.477 Atomic Boundary Size (Normal): 0 00:17:31.477 Atomic Boundary Size (PFail): 0 00:17:31.477 Atomic Boundary Offset: 0 00:17:31.477 Maximum Single Source Range Length: 65535 00:17:31.477 Maximum Copy Length: 65535 00:17:31.477 Maximum Source Range Count: 1 00:17:31.477 NGUID/EUI64 Never Reused: No 00:17:31.477 Namespace Write Protected: No 00:17:31.477 Number of LBA Formats: 1 00:17:31.477 Current LBA Format: LBA Format #00 00:17:31.477 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:31.477 00:17:31.477 07:23:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:31.477 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.738 [2024-07-25 
07:23:38.847592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:37.025 Initializing NVMe Controllers 00:17:37.025 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:37.025 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:37.025 Initialization complete. Launching workers. 00:17:37.025 ======================================================== 00:17:37.025 Latency(us) 00:17:37.025 Device Information : IOPS MiB/s Average min max 00:17:37.025 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40005.60 156.27 3199.62 841.93 8241.39 00:17:37.025 ======================================================== 00:17:37.025 Total : 40005.60 156.27 3199.62 841.93 8241.39 00:17:37.025 00:17:37.025 [2024-07-25 07:23:43.952417] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:37.025 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:37.025 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.025 [2024-07-25 07:23:44.124959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:42.357 Initializing NVMe Controllers 00:17:42.357 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:42.357 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:42.357 Initialization complete. Launching workers. 
00:17:42.357 ======================================================== 00:17:42.357 Latency(us) 00:17:42.357 Device Information : IOPS MiB/s Average min max 00:17:42.357 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35303.66 137.90 3625.20 1106.73 7368.44 00:17:42.357 ======================================================== 00:17:42.357 Total : 35303.66 137.90 3625.20 1106.73 7368.44 00:17:42.357 00:17:42.357 [2024-07-25 07:23:49.144273] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:42.357 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:42.357 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.357 [2024-07-25 07:23:49.336414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:47.648 [2024-07-25 07:23:54.471289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:47.648 Initializing NVMe Controllers 00:17:47.648 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:47.648 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:47.648 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:47.648 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:47.648 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:47.648 Initialization complete. Launching workers. 00:17:47.648 Starting thread on core 2 00:17:47.648 Starting thread on core 3 00:17:47.648 Starting thread on core 1 00:17:47.648 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:47.648 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.648 [2024-07-25 07:23:54.724625] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:50.952 [2024-07-25 07:23:57.775347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:50.952 Initializing NVMe Controllers 00:17:50.952 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:50.952 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:50.952 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:50.952 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:50.952 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:50.952 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:50.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:50.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:50.952 Initialization complete. Launching workers. 
00:17:50.952 Starting thread on core 1 with urgent priority queue 00:17:50.952 Starting thread on core 2 with urgent priority queue 00:17:50.952 Starting thread on core 3 with urgent priority queue 00:17:50.952 Starting thread on core 0 with urgent priority queue 00:17:50.952 SPDK bdev Controller (SPDK2 ) core 0: 14211.67 IO/s 7.04 secs/100000 ios 00:17:50.952 SPDK bdev Controller (SPDK2 ) core 1: 11750.33 IO/s 8.51 secs/100000 ios 00:17:50.952 SPDK bdev Controller (SPDK2 ) core 2: 8304.67 IO/s 12.04 secs/100000 ios 00:17:50.952 SPDK bdev Controller (SPDK2 ) core 3: 10248.00 IO/s 9.76 secs/100000 ios 00:17:50.952 ======================================================== 00:17:50.952 00:17:50.952 07:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:50.952 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.952 [2024-07-25 07:23:58.037710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:50.952 Initializing NVMe Controllers 00:17:50.952 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:50.952 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:50.952 Namespace ID: 1 size: 0GB 00:17:50.952 Initialization complete. 00:17:50.952 INFO: using host memory buffer for IO 00:17:50.952 Hello world! 00:17:50.952 [2024-07-25 07:23:58.046776] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:50.952 07:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:50.952 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.952 [2024-07-25 07:23:58.301457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:52.337 Initializing NVMe Controllers 00:17:52.337 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:52.337 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:52.337 Initialization complete. Launching workers. 
00:17:52.337 submit (in ns) avg, min, max = 7683.8, 3968.3, 4000677.5 00:17:52.337 complete (in ns) avg, min, max = 16712.6, 2455.8, 4000035.8 00:17:52.337 00:17:52.337 Submit histogram 00:17:52.337 ================ 00:17:52.337 Range in us Cumulative Count 00:17:52.337 3.947 - 3.973: 0.0209% ( 4) 00:17:52.337 3.973 - 4.000: 1.2304% ( 232) 00:17:52.337 4.000 - 4.027: 5.5631% ( 831) 00:17:52.337 4.027 - 4.053: 14.9479% ( 1800) 00:17:52.337 4.053 - 4.080: 26.0428% ( 2128) 00:17:52.337 4.080 - 4.107: 36.4546% ( 1997) 00:17:52.337 4.107 - 4.133: 46.2096% ( 1871) 00:17:52.337 4.133 - 4.160: 60.8498% ( 2808) 00:17:52.337 4.160 - 4.187: 77.6851% ( 3229) 00:17:52.337 4.187 - 4.213: 89.8905% ( 2341) 00:17:52.337 4.213 - 4.240: 96.4859% ( 1265) 00:17:52.337 4.240 - 4.267: 98.7018% ( 425) 00:17:52.337 4.267 - 4.293: 99.2753% ( 110) 00:17:52.337 4.293 - 4.320: 99.4213% ( 28) 00:17:52.337 4.320 - 4.347: 99.4630% ( 8) 00:17:52.337 4.347 - 4.373: 99.4682% ( 1) 00:17:52.337 4.373 - 4.400: 99.4734% ( 1) 00:17:52.337 4.400 - 4.427: 99.4838% ( 2) 00:17:52.337 4.453 - 4.480: 99.4943% ( 2) 00:17:52.337 4.480 - 4.507: 99.5047% ( 2) 00:17:52.337 4.507 - 4.533: 99.5203% ( 3) 00:17:52.337 4.587 - 4.613: 99.5308% ( 2) 00:17:52.337 4.827 - 4.853: 99.5360% ( 1) 00:17:52.337 4.933 - 4.960: 99.5412% ( 1) 00:17:52.337 4.987 - 5.013: 99.5464% ( 1) 00:17:52.337 5.093 - 5.120: 99.5516% ( 1) 00:17:52.337 5.120 - 5.147: 99.5568% ( 1) 00:17:52.337 5.307 - 5.333: 99.5620% ( 1) 00:17:52.337 5.413 - 5.440: 99.5673% ( 1) 00:17:52.337 5.573 - 5.600: 99.5725% ( 1) 00:17:52.337 5.627 - 5.653: 99.5777% ( 1) 00:17:52.337 6.027 - 6.053: 99.5829% ( 1) 00:17:52.337 6.133 - 6.160: 99.5881% ( 1) 00:17:52.337 6.160 - 6.187: 99.5933% ( 1) 00:17:52.337 6.213 - 6.240: 99.6038% ( 2) 00:17:52.337 6.240 - 6.267: 99.6090% ( 1) 00:17:52.337 6.267 - 6.293: 99.6142% ( 1) 00:17:52.337 6.320 - 6.347: 99.6403% ( 5) 00:17:52.337 6.373 - 6.400: 99.6611% ( 4) 00:17:52.337 6.427 - 6.453: 99.6663% ( 1) 00:17:52.337 6.453 - 6.480: 99.6715% ( 1) 00:17:52.337 6.480 - 6.507: 99.6924% ( 4) 00:17:52.337 6.507 - 6.533: 99.6976% ( 1) 00:17:52.337 6.560 - 6.587: 99.7080% ( 2) 00:17:52.338 6.587 - 6.613: 99.7132% ( 1) 00:17:52.338 6.720 - 6.747: 99.7185% ( 1) 00:17:52.338 6.880 - 6.933: 99.7237% ( 1) 00:17:52.338 6.933 - 6.987: 99.7341% ( 2) 00:17:52.338 7.040 - 7.093: 99.7393% ( 1) 00:17:52.338 7.147 - 7.200: 99.7445% ( 1) 00:17:52.338 7.200 - 7.253: 99.7497% ( 1) 00:17:52.338 7.413 - 7.467: 99.7602% ( 2) 00:17:52.338 7.573 - 7.627: 99.7758% ( 3) 00:17:52.338 7.627 - 7.680: 99.7810% ( 1) 00:17:52.338 7.680 - 7.733: 99.7862% ( 1) 00:17:52.338 7.733 - 7.787: 99.8019% ( 3) 00:17:52.338 7.787 - 7.840: 99.8227% ( 4) 00:17:52.338 7.840 - 7.893: 99.8279% ( 1) 00:17:52.338 7.893 - 7.947: 99.8332% ( 1) 00:17:52.338 8.000 - 8.053: 99.8384% ( 1) 00:17:52.338 8.107 - 8.160: 99.8488% ( 2) 00:17:52.338 8.160 - 8.213: 99.8540% ( 1) 00:17:52.338 8.267 - 8.320: 99.8592% ( 1) 00:17:52.338 8.320 - 8.373: 99.8644% ( 1) 00:17:52.338 8.533 - 8.587: 99.8749% ( 2) 00:17:52.338 8.640 - 8.693: 99.8801% ( 1) 00:17:52.338 8.747 - 8.800: 99.8905% ( 2) 00:17:52.338 9.387 - 9.440: 99.8957% ( 1) 00:17:52.338 9.653 - 9.707: 99.9009% ( 1) 00:17:52.338 13.067 - 13.120: 99.9062% ( 1) 00:17:52.338 13.973 - 14.080: 99.9114% ( 1) 00:17:52.338 3986.773 - 4014.080: 100.0000% ( 17) 00:17:52.338 00:17:52.338 Complete histogram 00:17:52.338 ================== 00:17:52.338 Range in us Cumulative Count 00:17:52.338 2.453 - 2.467: 0.2815% ( 54) 00:17:52.338 2.467 - 2.480: 0.9072% ( 120) 00:17:52.338 2.480 - 
2.493: 1.0323% ( 24) 00:17:52.338 2.493 - 2.507: 13.7383% ( 2437) 00:17:52.338 2.507 - 2.520: 50.0469% ( 6964) 00:17:52.338 2.520 - [2024-07-25 07:23:59.397852] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:52.338 2.533: 63.0396% ( 2492) 00:17:52.338 2.533 - 2.547: 78.3994% ( 2946) 00:17:52.338 2.547 - 2.560: 81.0428% ( 507) 00:17:52.338 2.560 - 2.573: 82.8884% ( 354) 00:17:52.338 2.573 - 2.587: 86.8874% ( 767) 00:17:52.338 2.587 - 2.600: 92.8676% ( 1147) 00:17:52.338 2.600 - 2.613: 96.2930% ( 657) 00:17:52.338 2.613 - 2.627: 98.5245% ( 428) 00:17:52.338 2.627 - 2.640: 99.1971% ( 129) 00:17:52.338 2.640 - 2.653: 99.3274% ( 25) 00:17:52.338 2.653 - 2.667: 99.3848% ( 11) 00:17:52.338 2.667 - 2.680: 99.3900% ( 1) 00:17:52.338 4.613 - 4.640: 99.4004% ( 2) 00:17:52.338 4.640 - 4.667: 99.4056% ( 1) 00:17:52.338 4.747 - 4.773: 99.4213% ( 3) 00:17:52.338 4.773 - 4.800: 99.4369% ( 3) 00:17:52.338 4.800 - 4.827: 99.4421% ( 1) 00:17:52.338 4.827 - 4.853: 99.4473% ( 1) 00:17:52.338 4.853 - 4.880: 99.4578% ( 2) 00:17:52.338 4.880 - 4.907: 99.4682% ( 2) 00:17:52.338 4.933 - 4.960: 99.4786% ( 2) 00:17:52.338 5.067 - 5.093: 99.4838% ( 1) 00:17:52.338 5.227 - 5.253: 99.4891% ( 1) 00:17:52.338 5.680 - 5.707: 99.4995% ( 2) 00:17:52.338 5.867 - 5.893: 99.5047% ( 1) 00:17:52.338 5.893 - 5.920: 99.5099% ( 1) 00:17:52.338 6.053 - 6.080: 99.5151% ( 1) 00:17:52.338 6.080 - 6.107: 99.5203% ( 1) 00:17:52.338 6.107 - 6.133: 99.5255% ( 1) 00:17:52.338 6.133 - 6.160: 99.5308% ( 1) 00:17:52.338 6.160 - 6.187: 99.5360% ( 1) 00:17:52.338 6.187 - 6.213: 99.5412% ( 1) 00:17:52.338 6.213 - 6.240: 99.5516% ( 2) 00:17:52.338 6.293 - 6.320: 99.5725% ( 4) 00:17:52.338 6.667 - 6.693: 99.5777% ( 1) 00:17:52.338 6.720 - 6.747: 99.5829% ( 1) 00:17:52.338 6.747 - 6.773: 99.5881% ( 1) 00:17:52.338 6.773 - 6.800: 99.5933% ( 1) 00:17:52.338 6.827 - 6.880: 99.5985% ( 1) 00:17:52.338 6.987 - 7.040: 99.6038% ( 1) 00:17:52.338 7.040 - 7.093: 99.6142% ( 2) 00:17:52.338 7.093 - 7.147: 99.6194% ( 1) 00:17:52.338 9.120 - 9.173: 99.6246% ( 1) 00:17:52.338 10.400 - 10.453: 99.6298% ( 1) 00:17:52.338 11.520 - 11.573: 99.6350% ( 1) 00:17:52.338 13.280 - 13.333: 99.6403% ( 1) 00:17:52.338 151.040 - 151.893: 99.6455% ( 1) 00:17:52.338 3986.773 - 4014.080: 100.0000% ( 68) 00:17:52.338 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:52.338 [ 00:17:52.338 { 00:17:52.338 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:52.338 "subtype": "Discovery", 00:17:52.338 "listen_addresses": [], 00:17:52.338 "allow_any_host": true, 00:17:52.338 "hosts": [] 00:17:52.338 }, 00:17:52.338 { 00:17:52.338 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:52.338 "subtype": "NVMe", 00:17:52.338 "listen_addresses": [ 00:17:52.338 { 00:17:52.338 "trtype": "VFIOUSER", 00:17:52.338 
"adrfam": "IPv4", 00:17:52.338 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:52.338 "trsvcid": "0" 00:17:52.338 } 00:17:52.338 ], 00:17:52.338 "allow_any_host": true, 00:17:52.338 "hosts": [], 00:17:52.338 "serial_number": "SPDK1", 00:17:52.338 "model_number": "SPDK bdev Controller", 00:17:52.338 "max_namespaces": 32, 00:17:52.338 "min_cntlid": 1, 00:17:52.338 "max_cntlid": 65519, 00:17:52.338 "namespaces": [ 00:17:52.338 { 00:17:52.338 "nsid": 1, 00:17:52.338 "bdev_name": "Malloc1", 00:17:52.338 "name": "Malloc1", 00:17:52.338 "nguid": "3FF0B9070B844359891105DB75BCEA09", 00:17:52.338 "uuid": "3ff0b907-0b84-4359-8911-05db75bcea09" 00:17:52.338 }, 00:17:52.338 { 00:17:52.338 "nsid": 2, 00:17:52.338 "bdev_name": "Malloc3", 00:17:52.338 "name": "Malloc3", 00:17:52.338 "nguid": "F6FC35D16F674477AE5EA77C0030126C", 00:17:52.338 "uuid": "f6fc35d1-6f67-4477-ae5e-a77c0030126c" 00:17:52.338 } 00:17:52.338 ] 00:17:52.338 }, 00:17:52.338 { 00:17:52.338 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:52.338 "subtype": "NVMe", 00:17:52.338 "listen_addresses": [ 00:17:52.338 { 00:17:52.338 "trtype": "VFIOUSER", 00:17:52.338 "adrfam": "IPv4", 00:17:52.338 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:52.338 "trsvcid": "0" 00:17:52.338 } 00:17:52.338 ], 00:17:52.338 "allow_any_host": true, 00:17:52.338 "hosts": [], 00:17:52.338 "serial_number": "SPDK2", 00:17:52.338 "model_number": "SPDK bdev Controller", 00:17:52.338 "max_namespaces": 32, 00:17:52.338 "min_cntlid": 1, 00:17:52.338 "max_cntlid": 65519, 00:17:52.338 "namespaces": [ 00:17:52.338 { 00:17:52.338 "nsid": 1, 00:17:52.338 "bdev_name": "Malloc2", 00:17:52.338 "name": "Malloc2", 00:17:52.338 "nguid": "42A19C7388F747D7991AEAD867968299", 00:17:52.338 "uuid": "42a19c73-88f7-47d7-991a-ead867968299" 00:17:52.338 } 00:17:52.338 ] 00:17:52.338 } 00:17:52.338 ] 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=76814 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:52.338 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:52.338 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.599 Malloc4 00:17:52.599 [2024-07-25 07:23:59.784218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:52.599 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:52.599 [2024-07-25 07:23:59.956317] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:52.861 07:23:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:52.861 Asynchronous Event Request test 00:17:52.861 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:52.861 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:52.861 Registering asynchronous event callbacks... 00:17:52.861 Starting namespace attribute notice tests for all controllers... 00:17:52.861 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:52.861 aer_cb - Changed Namespace 00:17:52.861 Cleaning up... 00:17:52.861 [ 00:17:52.861 { 00:17:52.861 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:52.861 "subtype": "Discovery", 00:17:52.861 "listen_addresses": [], 00:17:52.861 "allow_any_host": true, 00:17:52.861 "hosts": [] 00:17:52.861 }, 00:17:52.861 { 00:17:52.861 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:52.861 "subtype": "NVMe", 00:17:52.861 "listen_addresses": [ 00:17:52.861 { 00:17:52.861 "trtype": "VFIOUSER", 00:17:52.861 "adrfam": "IPv4", 00:17:52.861 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:52.861 "trsvcid": "0" 00:17:52.861 } 00:17:52.861 ], 00:17:52.861 "allow_any_host": true, 00:17:52.861 "hosts": [], 00:17:52.861 "serial_number": "SPDK1", 00:17:52.861 "model_number": "SPDK bdev Controller", 00:17:52.861 "max_namespaces": 32, 00:17:52.861 "min_cntlid": 1, 00:17:52.861 "max_cntlid": 65519, 00:17:52.861 "namespaces": [ 00:17:52.861 { 00:17:52.861 "nsid": 1, 00:17:52.861 "bdev_name": "Malloc1", 00:17:52.861 "name": "Malloc1", 00:17:52.861 "nguid": "3FF0B9070B844359891105DB75BCEA09", 00:17:52.861 "uuid": "3ff0b907-0b84-4359-8911-05db75bcea09" 00:17:52.861 }, 00:17:52.861 { 00:17:52.861 "nsid": 2, 00:17:52.861 "bdev_name": "Malloc3", 00:17:52.861 "name": "Malloc3", 00:17:52.861 "nguid": "F6FC35D16F674477AE5EA77C0030126C", 00:17:52.861 "uuid": "f6fc35d1-6f67-4477-ae5e-a77c0030126c" 00:17:52.861 } 00:17:52.861 ] 00:17:52.861 }, 00:17:52.861 { 00:17:52.861 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:52.861 "subtype": "NVMe", 00:17:52.861 "listen_addresses": [ 00:17:52.861 { 00:17:52.861 "trtype": "VFIOUSER", 00:17:52.861 "adrfam": "IPv4", 00:17:52.861 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:52.861 "trsvcid": "0" 00:17:52.861 } 00:17:52.861 ], 00:17:52.861 "allow_any_host": true, 00:17:52.861 "hosts": [], 00:17:52.861 
"serial_number": "SPDK2", 00:17:52.861 "model_number": "SPDK bdev Controller", 00:17:52.861 "max_namespaces": 32, 00:17:52.861 "min_cntlid": 1, 00:17:52.861 "max_cntlid": 65519, 00:17:52.861 "namespaces": [ 00:17:52.861 { 00:17:52.861 "nsid": 1, 00:17:52.861 "bdev_name": "Malloc2", 00:17:52.861 "name": "Malloc2", 00:17:52.861 "nguid": "42A19C7388F747D7991AEAD867968299", 00:17:52.861 "uuid": "42a19c73-88f7-47d7-991a-ead867968299" 00:17:52.861 }, 00:17:52.861 { 00:17:52.861 "nsid": 2, 00:17:52.861 "bdev_name": "Malloc4", 00:17:52.861 "name": "Malloc4", 00:17:52.861 "nguid": "2F6A4DF6DAAE4C9691CCF36430808223", 00:17:52.861 "uuid": "2f6a4df6-daae-4c96-91cc-f36430808223" 00:17:52.861 } 00:17:52.861 ] 00:17:52.861 } 00:17:52.861 ] 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 76814 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 67853 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 67853 ']' 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 67853 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67853 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67853' 00:17:52.861 killing process with pid 67853 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 67853 00:17:52.861 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 67853 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=76996 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 76996' 00:17:53.123 Process pid: 76996 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 76996 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 76996 ']' 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:53.123 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:53.123 [2024-07-25 07:24:00.443563] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:53.123 [2024-07-25 07:24:00.444494] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:17:53.123 [2024-07-25 07:24:00.444543] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.123 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.384 [2024-07-25 07:24:00.504428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.384 [2024-07-25 07:24:00.570254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.384 [2024-07-25 07:24:00.570291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.384 [2024-07-25 07:24:00.570298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.384 [2024-07-25 07:24:00.570305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.384 [2024-07-25 07:24:00.570310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.384 [2024-07-25 07:24:00.570450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.384 [2024-07-25 07:24:00.570552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.384 [2024-07-25 07:24:00.570700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.384 [2024-07-25 07:24:00.570701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.384 [2024-07-25 07:24:00.638381] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:53.384 [2024-07-25 07:24:00.638403] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:53.384 [2024-07-25 07:24:00.639557] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:17:53.384 [2024-07-25 07:24:00.639882] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:53.384 [2024-07-25 07:24:00.639991] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:53.971 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.971 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:53.971 07:24:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:54.914 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:55.175 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:55.175 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:55.175 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:55.175 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:55.175 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:55.175 Malloc1 00:17:55.436 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:55.436 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:55.697 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:55.959 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:55.959 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:55.959 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:55.959 Malloc2 00:17:55.959 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:56.220 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a 
/var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 76996 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 76996 ']' 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 76996 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76996 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76996' 00:17:56.481 killing process with pid 76996 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 76996 00:17:56.481 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 76996 00:17:56.742 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:56.742 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:56.742 00:17:56.742 real 0m50.504s 00:17:56.742 user 3m20.208s 00:17:56.742 sys 0m3.072s 00:17:56.742 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:56.742 07:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:56.742 ************************************ 00:17:56.742 END TEST nvmf_vfio_user 00:17:56.742 ************************************ 00:17:56.742 07:24:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:56.742 07:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:56.742 07:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:56.742 07:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:56.742 ************************************ 00:17:56.742 START TEST nvmf_vfio_user_nvme_compliance 00:17:56.742 ************************************ 00:17:56.742 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:57.004 * Looking for test storage... 
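The vfio-user target setup that the trace above walks through reduces to a short RPC sequence. A minimal sketch, assuming nvmf_tgt is already running and serving RPCs on the default /var/tmp/spdk.sock; the RPC names, NQNs and socket directories are taken verbatim from the log above, everything else is illustrative:

    # create the VFIOUSER transport (the interrupt-mode run above additionally passes -M -I)
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    # back the subsystem with a 64 MB, 512-byte-block malloc bdev
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    # create the subsystem and attach the namespace
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    # the vfio-user listener address is a directory that must exist beforehand
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The log repeats the same sequence for a second device (Malloc2 / cnode2 / vfio-user2), and teardown is simply killing the target process and removing /var/run/vfio-user.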
00:17:57.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=77819 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 77819' 00:17:57.004 Process pid: 77819 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 77819 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 77819 ']' 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.004 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:57.004 [2024-07-25 07:24:04.250639] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:17:57.004 [2024-07-25 07:24:04.250719] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.004 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.005 [2024-07-25 07:24:04.315022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:57.266 [2024-07-25 07:24:04.389716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.266 [2024-07-25 07:24:04.389756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.266 [2024-07-25 07:24:04.389766] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.266 [2024-07-25 07:24:04.389775] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.266 [2024-07-25 07:24:04.389781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.266 [2024-07-25 07:24:04.389915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.266 [2024-07-25 07:24:04.390047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.266 [2024-07-25 07:24:04.390050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.838 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.838 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:57.838 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:58.779 malloc0 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:58.779 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.780 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:58.780 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.780 07:24:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:59.041 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.041 00:17:59.041 00:17:59.041 CUnit - A unit testing framework for C - Version 2.1-3 00:17:59.041 http://cunit.sourceforge.net/ 00:17:59.041 00:17:59.041 00:17:59.041 Suite: nvme_compliance 00:17:59.041 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 07:24:06.263699] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:59.041 [2024-07-25 07:24:06.265047] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:59.041 [2024-07-25 07:24:06.265058] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:59.041 [2024-07-25 07:24:06.265062] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:59.041 [2024-07-25 07:24:06.266722] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:59.041 passed 00:17:59.041 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 07:24:06.362338] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:59.041 [2024-07-25 07:24:06.365355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:59.041 passed 00:17:59.302 Test: admin_identify_ns ...[2024-07-25 07:24:06.461455] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:59.302 [2024-07-25 07:24:06.525214] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:59.302 [2024-07-25 07:24:06.533218] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:59.302 [2024-07-25 
07:24:06.554330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:59.302 passed 00:17:59.302 Test: admin_get_features_mandatory_features ...[2024-07-25 07:24:06.644956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:59.302 [2024-07-25 07:24:06.647974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:59.563 passed 00:17:59.563 Test: admin_get_features_optional_features ...[2024-07-25 07:24:06.743536] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:59.563 [2024-07-25 07:24:06.746547] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:59.563 passed 00:17:59.563 Test: admin_set_features_number_of_queues ...[2024-07-25 07:24:06.838423] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:59.824 [2024-07-25 07:24:06.943298] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:59.824 passed 00:17:59.824 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 07:24:07.037311] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:59.824 [2024-07-25 07:24:07.040330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:59.824 passed 00:17:59.824 Test: admin_get_log_page_with_lpo ...[2024-07-25 07:24:07.133432] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.084 [2024-07-25 07:24:07.201213] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:00.084 [2024-07-25 07:24:07.214266] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.084 passed 00:18:00.084 Test: fabric_property_get ...[2024-07-25 07:24:07.308317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.084 [2024-07-25 07:24:07.309552] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:00.084 [2024-07-25 07:24:07.311344] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.084 passed 00:18:00.084 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 07:24:07.404900] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.084 [2024-07-25 07:24:07.406165] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:00.084 [2024-07-25 07:24:07.409924] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.084 passed 00:18:00.344 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 07:24:07.502077] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.344 [2024-07-25 07:24:07.585210] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:00.344 [2024-07-25 07:24:07.601224] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:00.344 [2024-07-25 07:24:07.606295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.344 passed 00:18:00.344 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 07:24:07.698308] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.344 [2024-07-25 07:24:07.699550] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:18:00.344 [2024-07-25 07:24:07.701320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.604 passed 00:18:00.604 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 07:24:07.796455] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.604 [2024-07-25 07:24:07.872210] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:00.604 [2024-07-25 07:24:07.896209] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:00.604 [2024-07-25 07:24:07.901297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.604 passed 00:18:00.875 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 07:24:07.992915] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.875 [2024-07-25 07:24:07.994159] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:00.875 [2024-07-25 07:24:07.994178] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:00.875 [2024-07-25 07:24:07.995929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:00.875 passed 00:18:00.875 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 07:24:08.090138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:00.875 [2024-07-25 07:24:08.179212] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:00.875 [2024-07-25 07:24:08.187207] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:00.875 [2024-07-25 07:24:08.195211] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:00.875 [2024-07-25 07:24:08.203207] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:00.875 [2024-07-25 07:24:08.232291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:01.176 passed 00:18:01.176 Test: admin_create_io_sq_verify_pc ...[2024-07-25 07:24:08.326306] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:01.176 [2024-07-25 07:24:08.345218] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:01.176 [2024-07-25 07:24:08.362468] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:01.176 passed 00:18:01.176 Test: admin_create_io_qp_max_qps ...[2024-07-25 07:24:08.450986] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:02.561 [2024-07-25 07:24:09.559211] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:02.822 [2024-07-25 07:24:09.938528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:02.822 passed 00:18:02.822 Test: admin_create_io_sq_shared_cq ...[2024-07-25 07:24:10.030248] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:02.822 [2024-07-25 07:24:10.167213] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:03.083 [2024-07-25 07:24:10.204275] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:03.083 passed 00:18:03.083 00:18:03.083 Run Summary: Type Total Ran Passed Failed Inactive 00:18:03.083 
suites 1 1 n/a 0 0 00:18:03.083 tests 18 18 18 0 0 00:18:03.083 asserts 360 360 360 0 n/a 00:18:03.083 00:18:03.083 Elapsed time = 1.652 seconds 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 77819 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 77819 ']' 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 77819 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77819 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77819' 00:18:03.083 killing process with pid 77819 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 77819 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 77819 00:18:03.083 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:03.346 00:18:03.346 real 0m6.399s 00:18:03.346 user 0m18.286s 00:18:03.346 sys 0m0.455s 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:03.346 ************************************ 00:18:03.346 END TEST nvmf_vfio_user_nvme_compliance 00:18:03.346 ************************************ 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:03.346 ************************************ 00:18:03.346 START TEST nvmf_vfio_user_fuzz 00:18:03.346 ************************************ 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:03.346 * Looking for test storage... 
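For the compliance run that finished just above, the target-side preparation mirrors the rpc_cmd calls in its trace. A rough sketch, assuming the same nvmf_tgt instance and default RPC socket; the NQN, serial, namespace-limit and listener path are copied from the log, the relative script paths are assumptions:

    # transport plus a single subsystem (max 32 namespaces) backed by malloc0
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    mkdir -p /var/run/vfio-user
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # then point the CUnit-based compliance binary at the vfio-user endpoint
    test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

The 18 tests in the run summary above are the CUnit cases this binary executes against that endpoint.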
00:18:03.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.346 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=79606 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 79606' 00:18:03.347 Process pid: 79606 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 79606 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 79606 ']' 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:03.347 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:04.289 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.289 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:04.289 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:05.231 malloc0 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.231 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:05.232 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.232 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:05.232 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.232 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:05.232 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.232 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:05.232 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:05.232 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:18:05.232 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:37.350 Fuzzing completed. Shutting down the fuzz application 00:18:37.350 00:18:37.350 Dumping successful admin opcodes: 00:18:37.350 8, 9, 10, 24, 00:18:37.350 Dumping successful io opcodes: 00:18:37.350 0, 00:18:37.350 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1140355, total successful commands: 4493, random_seed: 29807872 00:18:37.350 NS: 0x200003a1ef00 admin qp, Total commands completed: 143460, total successful commands: 1165, random_seed: 1628819968 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 79606 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 79606 ']' 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 79606 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79606 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79606' 00:18:37.350 killing process with pid 79606 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 79606 00:18:37.350 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 79606 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:37.350 00:18:37.350 real 0m33.608s 00:18:37.350 user 0m38.402s 00:18:37.350 sys 0m25.442s 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:37.350 ************************************ 00:18:37.350 END TEST nvmf_vfio_user_fuzz 00:18:37.350 ************************************ 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.350 ************************************ 00:18:37.350 START TEST nvmf_auth_target 00:18:37.350 ************************************ 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:37.350 * Looking for test storage... 00:18:37.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.350 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.351 07:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:37.351 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:43.944 07:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.944 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
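The device probing that follows works off sysfs alone: each supported vendor:device pair registered above is looked up under /sys/bus/pci/devices, and a matching function's net/ subdirectory names the kernel interface the TCP tests will use. A rough sketch of that pattern, reduced to the Intel E810 ID (8086:159b) actually present in this run; the real gather_supported_nvmf_pci_devs in test/nvmf/common.sh also covers the x722 and Mellanox IDs and the RDMA-specific filtering visible in the trace:

#!/usr/bin/env bash
# Sketch only: map E810 PCI functions to their net interfaces, as the trace below does.
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    for netdir in "$pci"/net/*; do
        [[ -e $netdir ]] || continue      # skip functions with no bound net driver
        echo "Found net devices under ${pci##*/}: ${netdir##*/}"
    done
done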
00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:43.945 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:43.945 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:43.945 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:43.945 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:43.945 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:43.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.763 ms 00:18:43.945 00:18:43.945 --- 10.0.0.2 ping statistics --- 00:18:43.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.945 rtt min/avg/max/mdev = 0.763/0.763/0.763/0.000 ms 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:43.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:43.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:18:43.945 00:18:43.945 --- 10.0.0.1 ping statistics --- 00:18:43.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.945 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=89652 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 89652 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 89652 ']' 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.945 07:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.945 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=89998 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c93754e1e23edecb94154e2553779ee8b283e4b3afdd3068 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:44.892 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.qKl 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c93754e1e23edecb94154e2553779ee8b283e4b3afdd3068 0 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c93754e1e23edecb94154e2553779ee8b283e4b3afdd3068 0 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:44.893 07:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c93754e1e23edecb94154e2553779ee8b283e4b3afdd3068 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.qKl 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.qKl 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.qKl 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=395f9971a3053909dec6458c7cb4cbf8fec3e2a055efb69e3b791456e2b2d8ed 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GWf 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 395f9971a3053909dec6458c7cb4cbf8fec3e2a055efb69e3b791456e2b2d8ed 3 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 395f9971a3053909dec6458c7cb4cbf8fec3e2a055efb69e3b791456e2b2d8ed 3 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=395f9971a3053909dec6458c7cb4cbf8fec3e2a055efb69e3b791456e2b2d8ed 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GWf 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GWf 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.GWf 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:44.893 07:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8d2efd1308bb7c225918f5275d51e78e 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.29C 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8d2efd1308bb7c225918f5275d51e78e 1 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8d2efd1308bb7c225918f5275d51e78e 1 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8d2efd1308bb7c225918f5275d51e78e 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.29C 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.29C 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.29C 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:44.893 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=59b42a98bc633ab35f80e9fbb1048dc198786fd73948668e 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VvG 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
59b42a98bc633ab35f80e9fbb1048dc198786fd73948668e 2 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 59b42a98bc633ab35f80e9fbb1048dc198786fd73948668e 2 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=59b42a98bc633ab35f80e9fbb1048dc198786fd73948668e 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VvG 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VvG 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.VvG 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=35e448908760c85efa955a46b6df9ee05d868cf8ce453014 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YZs 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 35e448908760c85efa955a46b6df9ee05d868cf8ce453014 2 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 35e448908760c85efa955a46b6df9ee05d868cf8ce453014 2 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=35e448908760c85efa955a46b6df9ee05d868cf8ce453014 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YZs 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YZs 
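The gen_dhchap_key calls in this stretch of the log all end the same way: a hex secret pulled from /dev/urandom is handed to a small python formatter (elided from the xtrace output) and written to a /tmp/spdk.key-* file in the DHHC-1 representation that later shows up in the nvme connect commands. A hedged sketch of that framing, assuming the secret's ASCII bytes get a zlib CRC-32 appended little-endian before base64 encoding; the hash-id field follows the digests map declared earlier (00 null, 01 sha256, 02 sha384, 03 sha512), but the in-tree format_dhchap_key may differ in detail:

# Sketch only: DHHC-1 framing of a generated hex secret. The CRC handling here is an
# assumption; compare against format_dhchap_key in test/nvmf/common.sh.
format_dhchap_key_sketch() {
    local key=$1 hash_id=$2    # e.g. the 48-char hex string and digest id 0 seen above
    python3 - "$key" "$hash_id" <<'PYEOF'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()
crc = struct.pack('<I', zlib.crc32(secret) & 0xffffffff)  # assumed little-endian append
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(secret + crc).decode()}:")
PYEOF
}
# e.g. format_dhchap_key_sketch c93754e1e23edecb94154e2553779ee8b283e4b3afdd3068 0
# should produce something of the shape DHHC-1:00:<base64>: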
00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.YZs 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f0b106cea5782427728dfaf79e8575dc 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.y9Y 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f0b106cea5782427728dfaf79e8575dc 1 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f0b106cea5782427728dfaf79e8575dc 1 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f0b106cea5782427728dfaf79e8575dc 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.y9Y 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.y9Y 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.y9Y 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # 
key=a316bee3ba731758c4c3da9b957b92987eb162c87dea331302884fb89681654c 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ENy 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a316bee3ba731758c4c3da9b957b92987eb162c87dea331302884fb89681654c 3 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a316bee3ba731758c4c3da9b957b92987eb162c87dea331302884fb89681654c 3 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a316bee3ba731758c4c3da9b957b92987eb162c87dea331302884fb89681654c 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ENy 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ENy 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ENy 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 89652 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 89652 ']' 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
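At this point all four keys (and their controller counterparts) exist as files, and the test is waiting on two separate SPDK processes: the nvmf_tgt launched inside cvl_0_0_ns_spdk on the default /var/tmp/spdk.sock (pid 89652, driven through rpc_cmd) and the host-side spdk_tgt listening on /var/tmp/host.sock (pid 89998, driven through the hostrpc helper). Every key file is then registered with both keyrings, which is what the keyring_file_add_key pairs below are doing. A minimal sketch of that split; target_rpc/host_rpc are illustrative names for what the trace calls rpc_cmd and hostrpc:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
target_rpc() { "$RPC" "$@"; }                        # nvmf_tgt, default /var/tmp/spdk.sock
host_rpc()   { "$RPC" -s /var/tmp/host.sock "$@"; }  # host-side spdk_tgt

# Register the same key material on both sides, as the trace below does for key0..key3:
# target_rpc keyring_file_add_key key0 /tmp/spdk.key-null.qKl
# host_rpc   keyring_file_add_key key0 /tmp/spdk.key-null.qKl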
00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.155 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.416 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.416 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:45.416 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 89998 /var/tmp/host.sock 00:18:45.416 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 89998 ']' 00:18:45.416 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:45.416 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.416 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:45.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:45.416 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.416 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qKl 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.qKl 00:18:45.677 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.qKl 00:18:45.677 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.GWf ]] 00:18:45.677 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GWf 00:18:45.677 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:45.677 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.677 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.677 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GWf 00:18:45.677 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GWf 00:18:45.938 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:45.938 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.29C 00:18:45.938 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.938 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.938 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.938 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.29C 00:18:45.938 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.29C 00:18:46.200 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.VvG ]] 00:18:46.200 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VvG 00:18:46.200 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.200 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.200 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.200 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VvG 00:18:46.200 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VvG 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.YZs 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.YZs 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.YZs 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.y9Y ]] 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.y9Y 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.y9Y 00:18:46.462 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.y9Y 00:18:46.723 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:46.723 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ENy 00:18:46.723 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.723 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.723 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.723 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ENy 00:18:46.723 07:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ENy 00:18:46.723 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:46.723 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:46.723 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.723 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.723 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:46.723 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:46.985 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:46.985 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.985 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.985 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:46.985 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.985 07:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.985 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.985 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.985 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.985 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.985 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.985 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.249 00:18:47.249 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.249 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.249 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.550 { 00:18:47.550 "cntlid": 1, 00:18:47.550 "qid": 0, 00:18:47.550 "state": "enabled", 00:18:47.550 "thread": "nvmf_tgt_poll_group_000", 00:18:47.550 "listen_address": { 00:18:47.550 "trtype": "TCP", 00:18:47.550 "adrfam": "IPv4", 00:18:47.550 "traddr": "10.0.0.2", 00:18:47.550 "trsvcid": "4420" 00:18:47.550 }, 00:18:47.550 "peer_address": { 00:18:47.550 "trtype": "TCP", 00:18:47.550 "adrfam": "IPv4", 00:18:47.550 "traddr": "10.0.0.1", 00:18:47.550 "trsvcid": "33980" 00:18:47.550 }, 00:18:47.550 "auth": { 00:18:47.550 "state": "completed", 00:18:47.550 "digest": "sha256", 00:18:47.550 "dhgroup": "null" 00:18:47.550 } 00:18:47.550 } 00:18:47.550 ]' 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.550 07:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.550 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.812 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:18:48.384 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.384 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.384 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.384 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.384 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.384 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.384 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:48.384 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.646 07:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.907 00:18:48.907 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.907 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.907 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.168 { 00:18:49.168 "cntlid": 3, 00:18:49.168 "qid": 0, 00:18:49.168 "state": "enabled", 00:18:49.168 "thread": "nvmf_tgt_poll_group_000", 00:18:49.168 "listen_address": { 00:18:49.168 "trtype": "TCP", 00:18:49.168 "adrfam": "IPv4", 00:18:49.168 "traddr": "10.0.0.2", 00:18:49.168 "trsvcid": "4420" 00:18:49.168 }, 00:18:49.168 "peer_address": { 00:18:49.168 "trtype": "TCP", 00:18:49.168 "adrfam": "IPv4", 00:18:49.168 "traddr": "10.0.0.1", 00:18:49.168 "trsvcid": "34008" 00:18:49.168 }, 00:18:49.168 "auth": { 00:18:49.168 "state": "completed", 00:18:49.168 "digest": "sha256", 00:18:49.168 "dhgroup": "null" 00:18:49.168 } 00:18:49.168 } 00:18:49.168 ]' 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.168 07:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.168 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.430 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:18:50.001 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.002 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.002 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.002 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.002 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.002 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.002 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:50.002 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:50.262 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:50.262 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.262 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.263 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:50.263 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.263 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.263 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.263 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.263 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.263 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
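The trace above repeats one connect_authenticate round per key: the host-side bdev_nvme options are reset to the digest and DH group under test, the key is registered for the host NQN on the target subsystem, and a controller is attached through the host RPC socket with the same key. A minimal sketch of that setup sequence, assuming rpc.py stands for spdk/scripts/rpc.py, that the subsystem, listener and named keys (key1/ckey1 and so on) were created earlier in auth.sh outside this excerpt, and that <hostnqn> is the nqn.2014-08.org.nvmexpress:uuid:... value used throughout the log:

  # one iteration of connect_authenticate <digest> <dhgroup> <keyid> (sketch, not the script itself)
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null            # host side: allowed auth parameters
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1                # target side: require DH-HMAC-CHAP for this host
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1                # host side: connect and authenticate in-band

The same three calls recur below with key2, key3 and key0, and later with --dhchap-dhgroups ffdhe2048 and ffdhe3072 in place of null.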
00:18:50.263 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.263 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.523 00:18:50.523 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.523 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.523 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.784 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.784 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.784 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.784 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.784 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.784 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.784 { 00:18:50.784 "cntlid": 5, 00:18:50.784 "qid": 0, 00:18:50.784 "state": "enabled", 00:18:50.784 "thread": "nvmf_tgt_poll_group_000", 00:18:50.784 "listen_address": { 00:18:50.784 "trtype": "TCP", 00:18:50.784 "adrfam": "IPv4", 00:18:50.784 "traddr": "10.0.0.2", 00:18:50.784 "trsvcid": "4420" 00:18:50.784 }, 00:18:50.784 "peer_address": { 00:18:50.784 "trtype": "TCP", 00:18:50.784 "adrfam": "IPv4", 00:18:50.784 "traddr": "10.0.0.1", 00:18:50.784 "trsvcid": "34032" 00:18:50.784 }, 00:18:50.784 "auth": { 00:18:50.784 "state": "completed", 00:18:50.784 "digest": "sha256", 00:18:50.784 "dhgroup": "null" 00:18:50.784 } 00:18:50.784 } 00:18:50.784 ]' 00:18:50.784 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.784 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.784 07:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.784 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:50.784 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.785 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.785 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.785 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.045 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:18:51.617 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.879 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.140 00:18:52.140 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.140 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.140 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.401 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.401 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.401 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.401 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.401 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.401 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.401 { 00:18:52.401 "cntlid": 7, 00:18:52.401 "qid": 0, 00:18:52.401 "state": "enabled", 00:18:52.401 "thread": "nvmf_tgt_poll_group_000", 00:18:52.401 "listen_address": { 00:18:52.401 "trtype": "TCP", 00:18:52.401 "adrfam": "IPv4", 00:18:52.401 "traddr": "10.0.0.2", 00:18:52.401 "trsvcid": "4420" 00:18:52.401 }, 00:18:52.401 "peer_address": { 00:18:52.401 "trtype": "TCP", 00:18:52.401 "adrfam": "IPv4", 00:18:52.401 "traddr": "10.0.0.1", 00:18:52.401 "trsvcid": "34070" 00:18:52.401 }, 00:18:52.401 "auth": { 00:18:52.401 "state": "completed", 00:18:52.401 "digest": "sha256", 00:18:52.401 "dhgroup": "null" 00:18:52.401 } 00:18:52.401 } 00:18:52.401 ]' 00:18:52.401 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.401 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.402 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.402 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:52.402 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.402 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.402 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.402 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.663 07:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:18:53.235 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.235 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.235 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.235 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.235 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.235 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.235 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.235 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.235 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.496 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.756 00:18:53.756 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.756 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.756 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.017 { 00:18:54.017 "cntlid": 9, 00:18:54.017 "qid": 0, 00:18:54.017 "state": "enabled", 00:18:54.017 "thread": "nvmf_tgt_poll_group_000", 00:18:54.017 "listen_address": { 00:18:54.017 "trtype": "TCP", 00:18:54.017 "adrfam": "IPv4", 00:18:54.017 "traddr": "10.0.0.2", 00:18:54.017 "trsvcid": "4420" 00:18:54.017 }, 00:18:54.017 "peer_address": { 00:18:54.017 "trtype": "TCP", 00:18:54.017 "adrfam": "IPv4", 00:18:54.017 "traddr": "10.0.0.1", 00:18:54.017 "trsvcid": "34094" 00:18:54.017 }, 00:18:54.017 "auth": { 00:18:54.017 "state": "completed", 00:18:54.017 "digest": "sha256", 00:18:54.017 "dhgroup": "ffdhe2048" 00:18:54.017 } 00:18:54.017 } 00:18:54.017 ]' 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.017 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.277 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:18:54.849 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.849 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.849 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.849 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.849 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.849 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.849 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.110 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.370 00:18:55.370 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.370 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.370 07:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.631 { 00:18:55.631 "cntlid": 11, 00:18:55.631 "qid": 0, 00:18:55.631 "state": "enabled", 00:18:55.631 "thread": "nvmf_tgt_poll_group_000", 00:18:55.631 "listen_address": { 00:18:55.631 "trtype": "TCP", 00:18:55.631 "adrfam": "IPv4", 00:18:55.631 "traddr": "10.0.0.2", 00:18:55.631 "trsvcid": "4420" 00:18:55.631 }, 00:18:55.631 "peer_address": { 00:18:55.631 "trtype": "TCP", 00:18:55.631 "adrfam": "IPv4", 00:18:55.631 "traddr": "10.0.0.1", 00:18:55.631 "trsvcid": "34136" 00:18:55.631 }, 00:18:55.631 "auth": { 00:18:55.631 "state": "completed", 00:18:55.631 "digest": "sha256", 00:18:55.631 "dhgroup": "ffdhe2048" 00:18:55.631 } 00:18:55.631 } 00:18:55.631 ]' 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.631 07:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.892 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:18:56.836 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.836 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
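After each attach the script verifies on both ends that authentication actually completed, tears the session down, and repeats the check with the kernel initiator via nvme-cli. A condensed sketch of that half of the loop, under the same assumptions as the sketch above; the two jq filters here are collapsed into one call for brevity, they assume a single controller and a single qpair, and the DHHC-1 secret strings are the ones printed in full in the trace:

  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'             # expect sha256 / ffdhe2048 / completed
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # same key material exercised through the kernel host stack (in-band authentication with nvme-cli)
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q <hostnqn> --hostid <hostid> \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>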
00:18:56.836 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.836 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.836 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.836 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.836 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.836 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.836 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:56.836 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.836 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.836 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:56.836 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.836 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.836 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.836 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.836 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.836 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.836 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.837 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.097 00:18:57.097 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.097 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.097 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.097 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.097 07:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.097 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.097 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.097 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.097 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.097 { 00:18:57.097 "cntlid": 13, 00:18:57.097 "qid": 0, 00:18:57.097 "state": "enabled", 00:18:57.097 "thread": "nvmf_tgt_poll_group_000", 00:18:57.097 "listen_address": { 00:18:57.097 "trtype": "TCP", 00:18:57.097 "adrfam": "IPv4", 00:18:57.097 "traddr": "10.0.0.2", 00:18:57.097 "trsvcid": "4420" 00:18:57.097 }, 00:18:57.097 "peer_address": { 00:18:57.097 "trtype": "TCP", 00:18:57.097 "adrfam": "IPv4", 00:18:57.097 "traddr": "10.0.0.1", 00:18:57.097 "trsvcid": "43208" 00:18:57.097 }, 00:18:57.097 "auth": { 00:18:57.097 "state": "completed", 00:18:57.097 "digest": "sha256", 00:18:57.097 "dhgroup": "ffdhe2048" 00:18:57.097 } 00:18:57.097 } 00:18:57.097 ]' 00:18:57.097 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.358 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.358 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.358 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.358 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.358 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.358 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.358 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.617 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:18:58.189 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.189 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.189 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.189 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.189 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:58.189 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.189 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:58.189 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.450 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.711 00:18:58.711 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.711 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.711 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.971 { 00:18:58.971 "cntlid": 15, 00:18:58.971 "qid": 0, 00:18:58.971 "state": "enabled", 00:18:58.971 "thread": "nvmf_tgt_poll_group_000", 00:18:58.971 "listen_address": { 00:18:58.971 "trtype": "TCP", 00:18:58.971 "adrfam": "IPv4", 00:18:58.971 "traddr": "10.0.0.2", 00:18:58.971 "trsvcid": "4420" 00:18:58.971 }, 00:18:58.971 "peer_address": { 00:18:58.971 "trtype": "TCP", 00:18:58.971 "adrfam": "IPv4", 00:18:58.971 "traddr": "10.0.0.1", 00:18:58.971 "trsvcid": "43238" 00:18:58.971 }, 00:18:58.971 "auth": { 00:18:58.971 "state": "completed", 00:18:58.971 "digest": "sha256", 00:18:58.971 "dhgroup": "ffdhe2048" 00:18:58.971 } 00:18:58.971 } 00:18:58.971 ]' 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.971 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.231 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:18:59.800 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.800 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.800 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.800 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.800 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.800 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.800 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.800 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:59.800 07:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.061 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.322 00:19:00.322 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.322 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.322 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.582 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.582 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.582 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.582 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.582 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.582 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.582 { 00:19:00.582 "cntlid": 17, 00:19:00.582 "qid": 0, 00:19:00.582 "state": "enabled", 00:19:00.582 
"thread": "nvmf_tgt_poll_group_000", 00:19:00.582 "listen_address": { 00:19:00.582 "trtype": "TCP", 00:19:00.582 "adrfam": "IPv4", 00:19:00.582 "traddr": "10.0.0.2", 00:19:00.582 "trsvcid": "4420" 00:19:00.582 }, 00:19:00.582 "peer_address": { 00:19:00.582 "trtype": "TCP", 00:19:00.583 "adrfam": "IPv4", 00:19:00.583 "traddr": "10.0.0.1", 00:19:00.583 "trsvcid": "43268" 00:19:00.583 }, 00:19:00.583 "auth": { 00:19:00.583 "state": "completed", 00:19:00.583 "digest": "sha256", 00:19:00.583 "dhgroup": "ffdhe3072" 00:19:00.583 } 00:19:00.583 } 00:19:00.583 ]' 00:19:00.583 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.583 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.583 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.583 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.583 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.583 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.583 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.583 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.843 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:19:01.414 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.415 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.415 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.415 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha256 ffdhe3072 1 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.710 07:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.013 00:19:02.013 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.013 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.013 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.013 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.013 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.013 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.013 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.013 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.013 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.013 { 00:19:02.013 "cntlid": 19, 00:19:02.013 "qid": 0, 00:19:02.013 "state": "enabled", 00:19:02.013 "thread": "nvmf_tgt_poll_group_000", 00:19:02.013 "listen_address": { 00:19:02.013 "trtype": "TCP", 00:19:02.013 "adrfam": "IPv4", 00:19:02.013 "traddr": "10.0.0.2", 00:19:02.013 "trsvcid": "4420" 00:19:02.013 }, 00:19:02.013 "peer_address": { 00:19:02.013 "trtype": "TCP", 00:19:02.013 "adrfam": "IPv4", 00:19:02.013 
"traddr": "10.0.0.1", 00:19:02.013 "trsvcid": "43296" 00:19:02.013 }, 00:19:02.013 "auth": { 00:19:02.013 "state": "completed", 00:19:02.013 "digest": "sha256", 00:19:02.013 "dhgroup": "ffdhe3072" 00:19:02.013 } 00:19:02.013 } 00:19:02.013 ]' 00:19:02.013 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.274 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.274 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.274 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:02.274 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.274 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.274 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.274 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.535 07:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:19:03.106 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.106 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.106 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.106 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.106 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.106 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.106 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:03.106 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.366 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.627 00:19:03.627 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.627 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.627 07:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.886 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.886 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.886 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.886 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.886 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.886 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.886 { 00:19:03.886 "cntlid": 21, 00:19:03.886 "qid": 0, 00:19:03.886 "state": "enabled", 00:19:03.887 "thread": "nvmf_tgt_poll_group_000", 00:19:03.887 "listen_address": { 00:19:03.887 "trtype": "TCP", 00:19:03.887 "adrfam": "IPv4", 00:19:03.887 "traddr": "10.0.0.2", 00:19:03.887 "trsvcid": "4420" 00:19:03.887 }, 00:19:03.887 "peer_address": { 00:19:03.887 "trtype": "TCP", 00:19:03.887 "adrfam": "IPv4", 00:19:03.887 "traddr": "10.0.0.1", 00:19:03.887 "trsvcid": "43306" 00:19:03.887 }, 00:19:03.887 "auth": { 00:19:03.887 "state": "completed", 00:19:03.887 "digest": "sha256", 00:19:03.887 "dhgroup": "ffdhe3072" 00:19:03.887 } 00:19:03.887 } 00:19:03.887 ]' 00:19:03.887 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:03.887 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.887 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.887 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.887 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.887 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.887 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.887 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.147 07:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:19:04.717 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.978 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.978 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.978 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.978 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.978 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.978 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.978 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.978 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:04.979 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.979 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.979 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:04.979 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.979 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.979 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:04.979 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.979 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.979 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.979 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.979 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.239 00:19:05.239 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.239 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.239 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.499 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.499 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.499 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.499 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.499 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.499 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.499 { 00:19:05.499 "cntlid": 23, 00:19:05.499 "qid": 0, 00:19:05.499 "state": "enabled", 00:19:05.499 "thread": "nvmf_tgt_poll_group_000", 00:19:05.499 "listen_address": { 00:19:05.499 "trtype": "TCP", 00:19:05.499 "adrfam": "IPv4", 00:19:05.499 "traddr": "10.0.0.2", 00:19:05.499 "trsvcid": "4420" 00:19:05.499 }, 00:19:05.499 "peer_address": { 00:19:05.499 "trtype": "TCP", 00:19:05.499 "adrfam": "IPv4", 00:19:05.499 "traddr": "10.0.0.1", 00:19:05.500 "trsvcid": "43340" 00:19:05.500 }, 00:19:05.500 "auth": { 00:19:05.500 "state": "completed", 00:19:05.500 "digest": "sha256", 00:19:05.500 "dhgroup": "ffdhe3072" 00:19:05.500 } 00:19:05.500 } 00:19:05.500 ]' 00:19:05.500 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.500 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.500 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.500 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.500 07:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.500 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.500 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.500 07:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.760 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.701 07:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.701 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.962 00:19:06.962 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.962 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.962 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.222 { 00:19:07.222 "cntlid": 25, 00:19:07.222 "qid": 0, 00:19:07.222 "state": "enabled", 00:19:07.222 "thread": "nvmf_tgt_poll_group_000", 00:19:07.222 "listen_address": { 00:19:07.222 "trtype": "TCP", 00:19:07.222 "adrfam": "IPv4", 00:19:07.222 "traddr": "10.0.0.2", 00:19:07.222 "trsvcid": "4420" 00:19:07.222 }, 00:19:07.222 "peer_address": { 00:19:07.222 "trtype": "TCP", 00:19:07.222 "adrfam": "IPv4", 00:19:07.222 "traddr": "10.0.0.1", 00:19:07.222 "trsvcid": "45620" 00:19:07.222 }, 00:19:07.222 "auth": { 00:19:07.222 "state": "completed", 00:19:07.222 "digest": "sha256", 00:19:07.222 "dhgroup": "ffdhe4096" 00:19:07.222 } 00:19:07.222 } 00:19:07.222 ]' 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.222 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.222 07:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.223 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.483 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.423 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.424 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.684 00:19:08.684 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.684 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.684 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.944 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.944 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.944 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.944 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.945 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.945 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.945 { 00:19:08.945 "cntlid": 27, 00:19:08.945 "qid": 0, 00:19:08.945 "state": "enabled", 00:19:08.945 "thread": "nvmf_tgt_poll_group_000", 00:19:08.945 "listen_address": { 00:19:08.945 "trtype": "TCP", 00:19:08.945 "adrfam": "IPv4", 00:19:08.945 "traddr": "10.0.0.2", 00:19:08.945 "trsvcid": "4420" 00:19:08.945 }, 00:19:08.945 "peer_address": { 00:19:08.945 "trtype": "TCP", 00:19:08.945 "adrfam": "IPv4", 00:19:08.945 "traddr": "10.0.0.1", 00:19:08.945 "trsvcid": "45630" 00:19:08.945 }, 00:19:08.945 "auth": { 00:19:08.945 "state": "completed", 00:19:08.945 "digest": "sha256", 00:19:08.945 "dhgroup": "ffdhe4096" 00:19:08.945 } 00:19:08.945 } 00:19:08.945 ]' 00:19:08.945 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.945 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.945 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.945 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.945 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.945 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.945 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.945 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:09.205 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:19:10.146 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.146 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.146 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.146 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.146 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.146 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.146 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.147 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.408 00:19:10.408 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.408 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.408 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.669 { 00:19:10.669 "cntlid": 29, 00:19:10.669 "qid": 0, 00:19:10.669 "state": "enabled", 00:19:10.669 "thread": "nvmf_tgt_poll_group_000", 00:19:10.669 "listen_address": { 00:19:10.669 "trtype": "TCP", 00:19:10.669 "adrfam": "IPv4", 00:19:10.669 "traddr": "10.0.0.2", 00:19:10.669 "trsvcid": "4420" 00:19:10.669 }, 00:19:10.669 "peer_address": { 00:19:10.669 "trtype": "TCP", 00:19:10.669 "adrfam": "IPv4", 00:19:10.669 "traddr": "10.0.0.1", 00:19:10.669 "trsvcid": "45650" 00:19:10.669 }, 00:19:10.669 "auth": { 00:19:10.669 "state": "completed", 00:19:10.669 "digest": "sha256", 00:19:10.669 "dhgroup": "ffdhe4096" 00:19:10.669 } 00:19:10.669 } 00:19:10.669 ]' 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.669 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.930 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:19:11.871 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.871 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.871 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.871 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.871 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.871 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.871 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.871 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.871 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.131 00:19:12.131 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.131 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.131 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.391 { 00:19:12.391 "cntlid": 31, 00:19:12.391 "qid": 0, 00:19:12.391 "state": "enabled", 00:19:12.391 "thread": "nvmf_tgt_poll_group_000", 00:19:12.391 "listen_address": { 00:19:12.391 "trtype": "TCP", 00:19:12.391 "adrfam": "IPv4", 00:19:12.391 "traddr": "10.0.0.2", 00:19:12.391 "trsvcid": "4420" 00:19:12.391 }, 00:19:12.391 "peer_address": { 00:19:12.391 "trtype": "TCP", 00:19:12.391 "adrfam": "IPv4", 00:19:12.391 "traddr": "10.0.0.1", 00:19:12.391 "trsvcid": "45686" 00:19:12.391 }, 00:19:12.391 "auth": { 00:19:12.391 "state": "completed", 00:19:12.391 "digest": "sha256", 00:19:12.391 "dhgroup": "ffdhe4096" 00:19:12.391 } 00:19:12.391 } 00:19:12.391 ]' 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.391 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.651 07:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:19:13.222 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.483 07:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.053 00:19:14.053 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.053 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.053 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.053 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.053 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.053 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.053 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.053 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.053 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.053 { 00:19:14.053 "cntlid": 33, 00:19:14.053 "qid": 0, 00:19:14.053 "state": "enabled", 00:19:14.053 "thread": "nvmf_tgt_poll_group_000", 00:19:14.053 "listen_address": { 00:19:14.053 "trtype": "TCP", 00:19:14.053 "adrfam": "IPv4", 00:19:14.053 "traddr": "10.0.0.2", 00:19:14.053 "trsvcid": "4420" 00:19:14.053 }, 00:19:14.053 "peer_address": { 00:19:14.053 "trtype": "TCP", 00:19:14.053 "adrfam": "IPv4", 00:19:14.053 "traddr": "10.0.0.1", 00:19:14.053 "trsvcid": "45710" 00:19:14.053 }, 00:19:14.053 "auth": { 00:19:14.053 "state": "completed", 00:19:14.053 "digest": "sha256", 00:19:14.054 "dhgroup": "ffdhe6144" 00:19:14.054 } 00:19:14.054 } 00:19:14.054 ]' 00:19:14.054 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.054 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.054 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.054 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:14.314 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.314 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.314 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.314 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.314 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.254 07:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.254 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.826 00:19:15.826 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.826 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.826 07:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.826 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.826 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.826 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.826 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.826 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.826 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.826 { 00:19:15.826 "cntlid": 35, 00:19:15.826 "qid": 0, 00:19:15.826 "state": "enabled", 00:19:15.826 "thread": "nvmf_tgt_poll_group_000", 00:19:15.826 "listen_address": { 00:19:15.826 "trtype": "TCP", 00:19:15.826 "adrfam": "IPv4", 00:19:15.826 "traddr": "10.0.0.2", 00:19:15.826 "trsvcid": "4420" 00:19:15.826 }, 00:19:15.826 "peer_address": { 00:19:15.826 "trtype": "TCP", 00:19:15.826 "adrfam": "IPv4", 00:19:15.826 "traddr": "10.0.0.1", 00:19:15.826 "trsvcid": "45734" 00:19:15.826 }, 00:19:15.826 "auth": { 00:19:15.826 "state": "completed", 00:19:15.826 "digest": "sha256", 00:19:15.826 "dhgroup": "ffdhe6144" 00:19:15.826 } 00:19:15.826 } 00:19:15.826 ]' 00:19:15.826 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.826 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.826 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.086 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.086 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.086 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.086 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.086 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.086 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:19:17.063 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.064 07:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.064 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.634 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.634 07:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.634 { 00:19:17.634 "cntlid": 37, 00:19:17.634 "qid": 0, 00:19:17.634 "state": "enabled", 00:19:17.634 "thread": "nvmf_tgt_poll_group_000", 00:19:17.634 "listen_address": { 00:19:17.634 "trtype": "TCP", 00:19:17.634 "adrfam": "IPv4", 00:19:17.634 "traddr": "10.0.0.2", 00:19:17.634 "trsvcid": "4420" 00:19:17.634 }, 00:19:17.634 "peer_address": { 00:19:17.634 "trtype": "TCP", 00:19:17.634 "adrfam": "IPv4", 00:19:17.634 "traddr": "10.0.0.1", 00:19:17.634 "trsvcid": "43064" 00:19:17.634 }, 00:19:17.634 "auth": { 00:19:17.634 "state": "completed", 00:19:17.634 "digest": "sha256", 00:19:17.634 "dhgroup": "ffdhe6144" 00:19:17.634 } 00:19:17.634 } 00:19:17.634 ]' 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.634 07:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.895 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.895 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.895 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.895 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:19:18.836 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.836 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.836 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.836 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.836 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.836 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.836 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.836 07:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.836 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:18.836 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.836 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.836 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:18.836 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.836 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.836 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:18.836 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.837 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.837 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.837 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.837 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.407 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.407 { 00:19:19.407 "cntlid": 39, 00:19:19.407 "qid": 0, 00:19:19.407 "state": "enabled", 00:19:19.407 "thread": "nvmf_tgt_poll_group_000", 00:19:19.407 "listen_address": { 00:19:19.407 "trtype": "TCP", 00:19:19.407 
"adrfam": "IPv4", 00:19:19.407 "traddr": "10.0.0.2", 00:19:19.407 "trsvcid": "4420" 00:19:19.407 }, 00:19:19.407 "peer_address": { 00:19:19.407 "trtype": "TCP", 00:19:19.407 "adrfam": "IPv4", 00:19:19.407 "traddr": "10.0.0.1", 00:19:19.407 "trsvcid": "43086" 00:19:19.407 }, 00:19:19.407 "auth": { 00:19:19.407 "state": "completed", 00:19:19.407 "digest": "sha256", 00:19:19.407 "dhgroup": "ffdhe6144" 00:19:19.407 } 00:19:19.407 } 00:19:19.407 ]' 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.407 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.668 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.668 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.668 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.668 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.668 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.668 07:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:20.609 07:25:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.609 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.181 00:19:21.181 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.181 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.181 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.441 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.442 { 00:19:21.442 "cntlid": 41, 00:19:21.442 "qid": 0, 00:19:21.442 "state": "enabled", 00:19:21.442 "thread": "nvmf_tgt_poll_group_000", 00:19:21.442 "listen_address": { 00:19:21.442 "trtype": "TCP", 00:19:21.442 "adrfam": "IPv4", 00:19:21.442 "traddr": "10.0.0.2", 00:19:21.442 "trsvcid": "4420" 00:19:21.442 }, 00:19:21.442 "peer_address": { 00:19:21.442 "trtype": "TCP", 00:19:21.442 "adrfam": "IPv4", 00:19:21.442 "traddr": "10.0.0.1", 00:19:21.442 "trsvcid": "43126" 00:19:21.442 
}, 00:19:21.442 "auth": { 00:19:21.442 "state": "completed", 00:19:21.442 "digest": "sha256", 00:19:21.442 "dhgroup": "ffdhe8192" 00:19:21.442 } 00:19:21.442 } 00:19:21.442 ]' 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.442 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.702 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:19:22.272 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.272 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.272 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.272 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.272 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.272 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.272 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.272 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.532 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.101 00:19:23.101 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.101 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.101 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.362 { 00:19:23.362 "cntlid": 43, 00:19:23.362 "qid": 0, 00:19:23.362 "state": "enabled", 00:19:23.362 "thread": "nvmf_tgt_poll_group_000", 00:19:23.362 "listen_address": { 00:19:23.362 "trtype": "TCP", 00:19:23.362 "adrfam": "IPv4", 00:19:23.362 "traddr": "10.0.0.2", 00:19:23.362 "trsvcid": "4420" 00:19:23.362 }, 00:19:23.362 "peer_address": { 00:19:23.362 "trtype": "TCP", 00:19:23.362 "adrfam": "IPv4", 00:19:23.362 "traddr": "10.0.0.1", 00:19:23.362 "trsvcid": "43152" 00:19:23.362 }, 00:19:23.362 "auth": { 00:19:23.362 "state": "completed", 00:19:23.362 "digest": "sha256", 00:19:23.362 "dhgroup": "ffdhe8192" 00:19:23.362 } 00:19:23.362 } 00:19:23.362 ]' 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
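For reference, the per-key authentication cycle that this xtrace keeps repeating (the bdev_nvme_set_options call at target/auth.sh@94 plus the connect_authenticate body at @34-@56) boils down to the bash sketch below. It is a reconstruction from the commands visible in the trace, not the verbatim target/auth.sh source: hostrpc is shown exactly as it expands at @31, while rpc_cmd is assumed to hit the target's default RPC socket, and the key0 secrets are the DHHC-1 strings printed at @52 in the log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side RPC, as expanded at target/auth.sh@31
rpc_cmd() { "$rpc" "$@"; }                         # assumption: target-side RPC on the default socket
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
keyid=0                                            # the trace walks keyid 0..3 for each digest/dhgroup
secret='DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==:'
ctrl_secret='DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=:'

# restrict the host to one digest/dhgroup pair for this iteration (@94)
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# allow the host on the subsystem with the key under test (@39)
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# attach a controller through the host RPC socket with the same key (@40)
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# confirm the controller exists and the target negotiated the expected parameters (@44-@48)
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                      # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'    # expect sha256
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'   # expect ffdhe8192
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'     # expect completed
# tear down the RPC-attached controller, then repeat the handshake with nvme-cli (@49-@55)
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"
# drop the host entry so the next key can be tested (@56)
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Note that key3 has no controller key in this trace, so for that iteration the --dhchap-ctrlr-key and --dhchap-ctrl-secret arguments are simply dropped, as the logged @39/@52 invocations show.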
00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.362 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.622 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:19:24.193 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.193 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.193 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.193 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.193 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.193 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.193 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:24.193 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.453 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.024 00:19:25.024 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.024 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.024 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.024 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.024 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.024 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.024 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.024 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.024 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.024 { 00:19:25.024 "cntlid": 45, 00:19:25.024 "qid": 0, 00:19:25.024 "state": "enabled", 00:19:25.024 "thread": "nvmf_tgt_poll_group_000", 00:19:25.024 "listen_address": { 00:19:25.024 "trtype": "TCP", 00:19:25.024 "adrfam": "IPv4", 00:19:25.024 "traddr": "10.0.0.2", 00:19:25.024 "trsvcid": "4420" 00:19:25.024 }, 00:19:25.024 "peer_address": { 00:19:25.024 "trtype": "TCP", 00:19:25.024 "adrfam": "IPv4", 00:19:25.024 "traddr": "10.0.0.1", 00:19:25.024 "trsvcid": "43182" 00:19:25.024 }, 00:19:25.024 "auth": { 00:19:25.024 "state": "completed", 00:19:25.024 "digest": "sha256", 00:19:25.024 "dhgroup": "ffdhe8192" 00:19:25.024 } 00:19:25.024 } 00:19:25.024 ]' 00:19:25.024 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.285 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.285 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.285 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == 
\f\f\d\h\e\8\1\9\2 ]] 00:19:25.285 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.285 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.285 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.285 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.545 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:19:26.116 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.116 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.116 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.116 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.116 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.116 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.116 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.116 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.376 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.947 00:19:26.947 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.947 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.947 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.947 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.947 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.947 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.947 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.207 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.207 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.207 { 00:19:27.207 "cntlid": 47, 00:19:27.207 "qid": 0, 00:19:27.207 "state": "enabled", 00:19:27.207 "thread": "nvmf_tgt_poll_group_000", 00:19:27.207 "listen_address": { 00:19:27.207 "trtype": "TCP", 00:19:27.207 "adrfam": "IPv4", 00:19:27.207 "traddr": "10.0.0.2", 00:19:27.207 "trsvcid": "4420" 00:19:27.207 }, 00:19:27.207 "peer_address": { 00:19:27.207 "trtype": "TCP", 00:19:27.207 "adrfam": "IPv4", 00:19:27.207 "traddr": "10.0.0.1", 00:19:27.207 "trsvcid": "43202" 00:19:27.207 }, 00:19:27.207 "auth": { 00:19:27.207 "state": "completed", 00:19:27.207 "digest": "sha256", 00:19:27.207 "dhgroup": "ffdhe8192" 00:19:27.207 } 00:19:27.207 } 00:19:27.207 ]' 00:19:27.207 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.207 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.207 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.207 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.207 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.207 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.207 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.208 07:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.468 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:19:28.039 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.039 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.039 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.039 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.039 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.039 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:28.039 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.039 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.039 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:28.039 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:28.299 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:28.299 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.299 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.299 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:28.299 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:28.299 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.299 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.300 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.300 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.300 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.300 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 
-- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.300 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.560 00:19:28.560 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.560 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.560 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.560 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.560 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.560 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.560 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.560 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.560 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.560 { 00:19:28.560 "cntlid": 49, 00:19:28.560 "qid": 0, 00:19:28.560 "state": "enabled", 00:19:28.560 "thread": "nvmf_tgt_poll_group_000", 00:19:28.560 "listen_address": { 00:19:28.560 "trtype": "TCP", 00:19:28.560 "adrfam": "IPv4", 00:19:28.560 "traddr": "10.0.0.2", 00:19:28.560 "trsvcid": "4420" 00:19:28.560 }, 00:19:28.560 "peer_address": { 00:19:28.560 "trtype": "TCP", 00:19:28.560 "adrfam": "IPv4", 00:19:28.560 "traddr": "10.0.0.1", 00:19:28.560 "trsvcid": "42544" 00:19:28.560 }, 00:19:28.560 "auth": { 00:19:28.560 "state": "completed", 00:19:28.560 "digest": "sha384", 00:19:28.560 "dhgroup": "null" 00:19:28.560 } 00:19:28.560 } 00:19:28.560 ]' 00:19:28.560 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.820 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.820 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.820 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:28.820 07:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.820 07:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.820 07:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.820 07:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.081 07:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:19:29.652 07:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.652 07:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.652 07:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.652 07:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.652 07:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.652 07:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.652 07:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:29.652 07:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:29.912 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:29.912 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.912 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:29.912 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:29.912 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.912 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.913 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.913 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.913 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.913 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.913 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.913 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.173 00:19:30.173 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.173 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.173 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.173 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.173 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.173 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.173 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.173 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.173 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.173 { 00:19:30.173 "cntlid": 51, 00:19:30.173 "qid": 0, 00:19:30.173 "state": "enabled", 00:19:30.173 "thread": "nvmf_tgt_poll_group_000", 00:19:30.173 "listen_address": { 00:19:30.173 "trtype": "TCP", 00:19:30.173 "adrfam": "IPv4", 00:19:30.173 "traddr": "10.0.0.2", 00:19:30.173 "trsvcid": "4420" 00:19:30.173 }, 00:19:30.173 "peer_address": { 00:19:30.173 "trtype": "TCP", 00:19:30.173 "adrfam": "IPv4", 00:19:30.173 "traddr": "10.0.0.1", 00:19:30.173 "trsvcid": "42582" 00:19:30.173 }, 00:19:30.173 "auth": { 00:19:30.173 "state": "completed", 00:19:30.173 "digest": "sha384", 00:19:30.173 "dhgroup": "null" 00:19:30.173 } 00:19:30.173 } 00:19:30.173 ]' 00:19:30.173 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.433 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.433 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.433 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:30.433 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.433 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.433 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.433 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.694 07:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:19:31.265 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.265 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.265 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.265 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.265 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.265 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.265 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:31.265 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.526 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.819 00:19:31.819 07:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.819 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.819 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.819 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.819 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.819 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.819 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.819 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.819 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.819 { 00:19:31.819 "cntlid": 53, 00:19:31.819 "qid": 0, 00:19:31.819 "state": "enabled", 00:19:31.819 "thread": "nvmf_tgt_poll_group_000", 00:19:31.819 "listen_address": { 00:19:31.819 "trtype": "TCP", 00:19:31.819 "adrfam": "IPv4", 00:19:31.819 "traddr": "10.0.0.2", 00:19:31.819 "trsvcid": "4420" 00:19:31.819 }, 00:19:31.819 "peer_address": { 00:19:31.819 "trtype": "TCP", 00:19:31.819 "adrfam": "IPv4", 00:19:31.819 "traddr": "10.0.0.1", 00:19:31.819 "trsvcid": "42604" 00:19:31.819 }, 00:19:31.819 "auth": { 00:19:31.819 "state": "completed", 00:19:31.819 "digest": "sha384", 00:19:31.819 "dhgroup": "null" 00:19:31.819 } 00:19:31.819 } 00:19:31.819 ]' 00:19:31.819 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.080 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.080 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.080 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:32.080 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.080 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.080 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.080 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.341 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:19:32.911 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.911 
07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.911 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.911 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.911 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.911 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.911 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.911 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.171 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.432 00:19:33.432 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.432 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.432 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.432 07:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.432 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.432 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.432 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.692 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.692 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.692 { 00:19:33.692 "cntlid": 55, 00:19:33.692 "qid": 0, 00:19:33.692 "state": "enabled", 00:19:33.692 "thread": "nvmf_tgt_poll_group_000", 00:19:33.692 "listen_address": { 00:19:33.692 "trtype": "TCP", 00:19:33.692 "adrfam": "IPv4", 00:19:33.692 "traddr": "10.0.0.2", 00:19:33.692 "trsvcid": "4420" 00:19:33.692 }, 00:19:33.692 "peer_address": { 00:19:33.692 "trtype": "TCP", 00:19:33.692 "adrfam": "IPv4", 00:19:33.692 "traddr": "10.0.0.1", 00:19:33.692 "trsvcid": "42636" 00:19:33.692 }, 00:19:33.692 "auth": { 00:19:33.692 "state": "completed", 00:19:33.692 "digest": "sha384", 00:19:33.692 "dhgroup": "null" 00:19:33.692 } 00:19:33.692 } 00:19:33.692 ]' 00:19:33.692 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.692 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.692 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.692 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:33.692 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.692 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.692 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.692 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.953 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:19:34.525 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.525 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.525 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.525 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.525 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.525 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.525 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.525 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:34.525 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.785 07:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.047 00:19:35.047 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.047 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.047 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.047 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.047 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.047 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.047 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.047 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.047 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.047 { 00:19:35.047 "cntlid": 57, 00:19:35.047 "qid": 0, 00:19:35.047 "state": "enabled", 00:19:35.047 "thread": "nvmf_tgt_poll_group_000", 00:19:35.047 "listen_address": { 00:19:35.047 "trtype": "TCP", 00:19:35.047 "adrfam": "IPv4", 00:19:35.047 "traddr": "10.0.0.2", 00:19:35.047 "trsvcid": "4420" 00:19:35.047 }, 00:19:35.047 "peer_address": { 00:19:35.047 "trtype": "TCP", 00:19:35.047 "adrfam": "IPv4", 00:19:35.047 "traddr": "10.0.0.1", 00:19:35.047 "trsvcid": "42658" 00:19:35.047 }, 00:19:35.047 "auth": { 00:19:35.047 "state": "completed", 00:19:35.047 "digest": "sha384", 00:19:35.047 "dhgroup": "ffdhe2048" 00:19:35.047 } 00:19:35.047 } 00:19:35.047 ]' 00:19:35.047 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.309 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.309 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.309 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.309 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.309 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.309 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.309 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.569 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:19:36.147 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.147 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.147 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.147 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.147 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.147 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.147 07:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.147 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.413 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.674 00:19:36.674 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.674 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.674 07:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.674 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.674 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.674 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.674 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.935 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.935 07:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.935 { 00:19:36.935 "cntlid": 59, 00:19:36.935 "qid": 0, 00:19:36.935 "state": "enabled", 00:19:36.935 "thread": "nvmf_tgt_poll_group_000", 00:19:36.935 "listen_address": { 00:19:36.935 "trtype": "TCP", 00:19:36.935 "adrfam": "IPv4", 00:19:36.935 "traddr": "10.0.0.2", 00:19:36.935 "trsvcid": "4420" 00:19:36.935 }, 00:19:36.935 "peer_address": { 00:19:36.935 "trtype": "TCP", 00:19:36.935 "adrfam": "IPv4", 00:19:36.935 "traddr": "10.0.0.1", 00:19:36.935 "trsvcid": "42686" 00:19:36.935 }, 00:19:36.935 "auth": { 00:19:36.935 "state": "completed", 00:19:36.935 "digest": "sha384", 00:19:36.935 "dhgroup": "ffdhe2048" 00:19:36.935 } 00:19:36.935 } 00:19:36.935 ]' 00:19:36.935 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.935 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.935 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.935 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:36.935 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.935 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.935 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.935 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.196 07:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:19:37.768 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.768 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.768 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.768 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.768 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.769 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.769 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.769 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.029 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.290 00:19:38.290 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.290 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.290 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.290 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.290 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.290 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.290 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.290 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.290 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.290 { 00:19:38.290 "cntlid": 61, 00:19:38.290 "qid": 0, 00:19:38.290 "state": "enabled", 00:19:38.290 "thread": "nvmf_tgt_poll_group_000", 00:19:38.290 "listen_address": { 00:19:38.290 "trtype": "TCP", 00:19:38.290 "adrfam": "IPv4", 00:19:38.290 "traddr": 
"10.0.0.2", 00:19:38.290 "trsvcid": "4420" 00:19:38.290 }, 00:19:38.290 "peer_address": { 00:19:38.290 "trtype": "TCP", 00:19:38.290 "adrfam": "IPv4", 00:19:38.290 "traddr": "10.0.0.1", 00:19:38.290 "trsvcid": "41070" 00:19:38.290 }, 00:19:38.290 "auth": { 00:19:38.290 "state": "completed", 00:19:38.290 "digest": "sha384", 00:19:38.290 "dhgroup": "ffdhe2048" 00:19:38.290 } 00:19:38.290 } 00:19:38.290 ]' 00:19:38.290 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.551 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.551 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.551 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:38.551 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.551 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.551 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.551 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.812 07:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:19:39.383 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.383 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.383 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.383 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.383 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.383 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.383 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.383 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.644 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:39.644 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.644 07:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.644 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:39.644 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:39.644 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.644 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:39.644 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.644 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.644 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.644 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.644 07:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.905 00:19:39.905 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.905 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.905 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.905 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.905 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.905 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.905 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.905 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.905 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.905 { 00:19:39.905 "cntlid": 63, 00:19:39.905 "qid": 0, 00:19:39.905 "state": "enabled", 00:19:39.905 "thread": "nvmf_tgt_poll_group_000", 00:19:39.905 "listen_address": { 00:19:39.905 "trtype": "TCP", 00:19:39.905 "adrfam": "IPv4", 00:19:39.905 "traddr": "10.0.0.2", 00:19:39.905 "trsvcid": "4420" 00:19:39.905 }, 00:19:39.905 "peer_address": { 00:19:39.905 "trtype": "TCP", 00:19:39.905 "adrfam": "IPv4", 00:19:39.905 "traddr": "10.0.0.1", 00:19:39.905 "trsvcid": "41104" 00:19:39.905 }, 00:19:39.905 "auth": { 00:19:39.905 "state": "completed", 00:19:39.905 "digest": "sha384", 00:19:39.905 "dhgroup": "ffdhe2048" 00:19:39.905 } 00:19:39.905 } 00:19:39.905 ]' 00:19:39.905 07:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.166 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.166 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.166 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.166 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.166 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.166 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.166 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.427 07:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:19:40.999 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.999 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.999 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.999 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.999 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.999 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.999 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.999 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:40.999 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.260 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.521 00:19:41.521 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.521 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.521 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.521 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.782 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.782 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.782 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.782 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.782 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.782 { 00:19:41.782 "cntlid": 65, 00:19:41.782 "qid": 0, 00:19:41.782 "state": "enabled", 00:19:41.782 "thread": "nvmf_tgt_poll_group_000", 00:19:41.782 "listen_address": { 00:19:41.782 "trtype": "TCP", 00:19:41.782 "adrfam": "IPv4", 00:19:41.782 "traddr": "10.0.0.2", 00:19:41.782 "trsvcid": "4420" 00:19:41.782 }, 00:19:41.782 "peer_address": { 00:19:41.782 "trtype": "TCP", 00:19:41.782 "adrfam": "IPv4", 00:19:41.782 "traddr": "10.0.0.1", 00:19:41.782 "trsvcid": "41130" 00:19:41.782 }, 00:19:41.782 "auth": { 00:19:41.782 "state": "completed", 00:19:41.782 "digest": "sha384", 00:19:41.782 "dhgroup": "ffdhe3072" 00:19:41.782 } 00:19:41.782 } 00:19:41.782 ]' 00:19:41.782 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.782 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.782 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.782 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.782 07:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.782 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.782 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.782 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.044 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:19:42.615 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.615 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.615 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.615 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.615 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.615 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.615 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:42.615 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key 
key1 --dhchap-ctrlr-key ckey1 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.876 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.137 00:19:43.137 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.137 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.137 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.398 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.398 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.398 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.398 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.398 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.398 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.398 { 00:19:43.398 "cntlid": 67, 00:19:43.398 "qid": 0, 00:19:43.398 "state": "enabled", 00:19:43.398 "thread": "nvmf_tgt_poll_group_000", 00:19:43.398 "listen_address": { 00:19:43.398 "trtype": "TCP", 00:19:43.398 "adrfam": "IPv4", 00:19:43.398 "traddr": "10.0.0.2", 00:19:43.398 "trsvcid": "4420" 00:19:43.398 }, 00:19:43.398 "peer_address": { 00:19:43.398 "trtype": "TCP", 00:19:43.398 "adrfam": "IPv4", 00:19:43.398 "traddr": "10.0.0.1", 00:19:43.398 "trsvcid": "41164" 00:19:43.398 }, 00:19:43.398 "auth": { 00:19:43.398 "state": "completed", 00:19:43.398 "digest": "sha384", 00:19:43.398 "dhgroup": "ffdhe3072" 00:19:43.398 } 00:19:43.398 } 00:19:43.398 ]' 00:19:43.398 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.398 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.398 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.399 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.399 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.399 
07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.399 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.399 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.660 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.603 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.865 00:19:44.865 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.865 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.865 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.865 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.865 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.865 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.865 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.865 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.126 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.126 { 00:19:45.126 "cntlid": 69, 00:19:45.126 "qid": 0, 00:19:45.126 "state": "enabled", 00:19:45.126 "thread": "nvmf_tgt_poll_group_000", 00:19:45.126 "listen_address": { 00:19:45.126 "trtype": "TCP", 00:19:45.126 "adrfam": "IPv4", 00:19:45.126 "traddr": "10.0.0.2", 00:19:45.126 "trsvcid": "4420" 00:19:45.126 }, 00:19:45.126 "peer_address": { 00:19:45.126 "trtype": "TCP", 00:19:45.126 "adrfam": "IPv4", 00:19:45.126 "traddr": "10.0.0.1", 00:19:45.126 "trsvcid": "41188" 00:19:45.126 }, 00:19:45.126 "auth": { 00:19:45.126 "state": "completed", 00:19:45.126 "digest": "sha384", 00:19:45.126 "dhgroup": "ffdhe3072" 00:19:45.126 } 00:19:45.126 } 00:19:45.126 ]' 00:19:45.126 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.126 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.126 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.126 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.126 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.126 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.126 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.126 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.387 07:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:19:45.958 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.958 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.958 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.958 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.958 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.958 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.958 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:45.958 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.290 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:46.290 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.290 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.290 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:46.290 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.290 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.290 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:46.290 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.290 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.290 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.290 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.290 07:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.550 00:19:46.550 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.550 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.550 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.550 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.550 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.550 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.550 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.550 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.550 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.550 { 00:19:46.550 "cntlid": 71, 00:19:46.550 "qid": 0, 00:19:46.550 "state": "enabled", 00:19:46.550 "thread": "nvmf_tgt_poll_group_000", 00:19:46.550 "listen_address": { 00:19:46.550 "trtype": "TCP", 00:19:46.550 "adrfam": "IPv4", 00:19:46.550 "traddr": "10.0.0.2", 00:19:46.550 "trsvcid": "4420" 00:19:46.550 }, 00:19:46.550 "peer_address": { 00:19:46.550 "trtype": "TCP", 00:19:46.550 "adrfam": "IPv4", 00:19:46.550 "traddr": "10.0.0.1", 00:19:46.550 "trsvcid": "41222" 00:19:46.550 }, 00:19:46.550 "auth": { 00:19:46.550 "state": "completed", 00:19:46.550 "digest": "sha384", 00:19:46.550 "dhgroup": "ffdhe3072" 00:19:46.551 } 00:19:46.551 } 00:19:46.551 ]' 00:19:46.551 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.811 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.811 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.811 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.811 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.811 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.811 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.811 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.812 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:19:47.754 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.754 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.754 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.754 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.754 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.754 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.754 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.754 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:47.754 07:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.754 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.015 00:19:48.015 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.015 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.015 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.275 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.275 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.275 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.275 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.275 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.275 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.275 { 00:19:48.275 "cntlid": 73, 00:19:48.275 "qid": 0, 00:19:48.275 "state": "enabled", 00:19:48.275 "thread": "nvmf_tgt_poll_group_000", 00:19:48.275 "listen_address": { 00:19:48.275 "trtype": "TCP", 00:19:48.275 "adrfam": "IPv4", 00:19:48.275 "traddr": "10.0.0.2", 00:19:48.275 "trsvcid": "4420" 00:19:48.275 }, 00:19:48.275 "peer_address": { 00:19:48.275 "trtype": "TCP", 00:19:48.275 "adrfam": "IPv4", 00:19:48.275 "traddr": "10.0.0.1", 00:19:48.275 "trsvcid": "50508" 00:19:48.275 }, 00:19:48.275 "auth": { 00:19:48.275 "state": "completed", 00:19:48.275 "digest": "sha384", 00:19:48.275 "dhgroup": "ffdhe4096" 00:19:48.275 } 00:19:48.275 } 00:19:48.275 ]' 00:19:48.275 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.275 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.275 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.275 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.275 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.536 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.536 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.536 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.536 07:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:19:49.477 07:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.477 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.736 00:19:49.736 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.736 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.736 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.997 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.997 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.997 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.997 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.997 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.997 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.997 { 00:19:49.997 "cntlid": 75, 00:19:49.997 "qid": 0, 00:19:49.997 "state": "enabled", 00:19:49.997 "thread": "nvmf_tgt_poll_group_000", 00:19:49.997 "listen_address": { 00:19:49.997 "trtype": "TCP", 00:19:49.997 "adrfam": "IPv4", 00:19:49.997 "traddr": "10.0.0.2", 00:19:49.997 "trsvcid": "4420" 00:19:49.997 }, 00:19:49.997 "peer_address": { 00:19:49.997 "trtype": "TCP", 00:19:49.997 "adrfam": "IPv4", 00:19:49.997 "traddr": "10.0.0.1", 00:19:49.997 "trsvcid": "50542" 00:19:49.997 }, 00:19:49.997 "auth": { 00:19:49.997 "state": "completed", 00:19:49.997 "digest": "sha384", 00:19:49.997 "dhgroup": "ffdhe4096" 00:19:49.997 } 00:19:49.997 } 00:19:49.997 ]' 00:19:49.997 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.997 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.997 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.997 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:49.997 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.257 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.257 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.257 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.257 07:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:19:51.199 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.199 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.199 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.199 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.199 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.199 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.199 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.199 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.199 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:51.199 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.199 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.200 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:51.200 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:51.200 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.200 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.200 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.200 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.200 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.200 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.200 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.460 00:19:51.460 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.460 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.460 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.720 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.721 { 00:19:51.721 "cntlid": 77, 00:19:51.721 "qid": 0, 00:19:51.721 "state": "enabled", 00:19:51.721 "thread": "nvmf_tgt_poll_group_000", 00:19:51.721 "listen_address": { 00:19:51.721 "trtype": "TCP", 00:19:51.721 "adrfam": "IPv4", 00:19:51.721 "traddr": "10.0.0.2", 00:19:51.721 "trsvcid": "4420" 00:19:51.721 }, 00:19:51.721 "peer_address": { 00:19:51.721 "trtype": "TCP", 00:19:51.721 "adrfam": "IPv4", 00:19:51.721 "traddr": "10.0.0.1", 00:19:51.721 "trsvcid": "50558" 00:19:51.721 }, 00:19:51.721 "auth": { 00:19:51.721 "state": "completed", 00:19:51.721 "digest": "sha384", 00:19:51.721 "dhgroup": "ffdhe4096" 00:19:51.721 } 00:19:51.721 } 00:19:51.721 ]' 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.721 07:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.982 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:19:52.553 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.553 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.553 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.553 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.553 07:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.553 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.553 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.553 07:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.814 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.074 00:19:53.074 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.074 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.074 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.334 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.335 { 00:19:53.335 "cntlid": 79, 00:19:53.335 "qid": 0, 00:19:53.335 "state": "enabled", 00:19:53.335 "thread": "nvmf_tgt_poll_group_000", 00:19:53.335 "listen_address": { 00:19:53.335 "trtype": "TCP", 00:19:53.335 "adrfam": "IPv4", 00:19:53.335 "traddr": "10.0.0.2", 00:19:53.335 "trsvcid": "4420" 00:19:53.335 }, 00:19:53.335 "peer_address": { 00:19:53.335 "trtype": "TCP", 00:19:53.335 "adrfam": "IPv4", 00:19:53.335 "traddr": "10.0.0.1", 00:19:53.335 "trsvcid": "50588" 00:19:53.335 }, 00:19:53.335 "auth": { 00:19:53.335 "state": "completed", 00:19:53.335 "digest": "sha384", 00:19:53.335 "dhgroup": "ffdhe4096" 00:19:53.335 } 00:19:53.335 } 00:19:53.335 ]' 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.335 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.595 07:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.549 07:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.121 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.121 { 00:19:55.121 "cntlid": 81, 00:19:55.121 
"qid": 0, 00:19:55.121 "state": "enabled", 00:19:55.121 "thread": "nvmf_tgt_poll_group_000", 00:19:55.121 "listen_address": { 00:19:55.121 "trtype": "TCP", 00:19:55.121 "adrfam": "IPv4", 00:19:55.121 "traddr": "10.0.0.2", 00:19:55.121 "trsvcid": "4420" 00:19:55.121 }, 00:19:55.121 "peer_address": { 00:19:55.121 "trtype": "TCP", 00:19:55.121 "adrfam": "IPv4", 00:19:55.121 "traddr": "10.0.0.1", 00:19:55.121 "trsvcid": "50610" 00:19:55.121 }, 00:19:55.121 "auth": { 00:19:55.121 "state": "completed", 00:19:55.121 "digest": "sha384", 00:19:55.121 "dhgroup": "ffdhe6144" 00:19:55.121 } 00:19:55.121 } 00:19:55.121 ]' 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:55.121 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.383 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.383 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.383 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.383 07:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.325 07:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.325 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.585 00:19:56.846 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.846 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.846 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.846 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.846 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.846 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.846 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.846 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.846 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.846 { 00:19:56.846 "cntlid": 83, 00:19:56.846 "qid": 0, 00:19:56.846 "state": "enabled", 00:19:56.846 "thread": "nvmf_tgt_poll_group_000", 00:19:56.846 "listen_address": { 00:19:56.846 "trtype": "TCP", 00:19:56.846 "adrfam": "IPv4", 00:19:56.846 "traddr": "10.0.0.2", 00:19:56.846 "trsvcid": "4420" 00:19:56.846 }, 00:19:56.846 "peer_address": { 
00:19:56.846 "trtype": "TCP", 00:19:56.846 "adrfam": "IPv4", 00:19:56.846 "traddr": "10.0.0.1", 00:19:56.846 "trsvcid": "50644" 00:19:56.846 }, 00:19:56.846 "auth": { 00:19:56.846 "state": "completed", 00:19:56.846 "digest": "sha384", 00:19:56.846 "dhgroup": "ffdhe6144" 00:19:56.846 } 00:19:56.846 } 00:19:56.846 ]' 00:19:56.846 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.846 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.846 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.107 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.107 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.107 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.107 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.107 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.107 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:19:58.048 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.048 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:58.048 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.048 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.048 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.048 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.049 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.619 00:19:58.619 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.619 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.619 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.619 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.619 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.619 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.619 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.619 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.619 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.619 { 00:19:58.619 "cntlid": 85, 00:19:58.619 "qid": 0, 00:19:58.619 "state": "enabled", 00:19:58.619 "thread": "nvmf_tgt_poll_group_000", 00:19:58.619 "listen_address": { 00:19:58.619 "trtype": "TCP", 00:19:58.619 "adrfam": "IPv4", 00:19:58.619 "traddr": "10.0.0.2", 00:19:58.619 "trsvcid": "4420" 00:19:58.619 }, 00:19:58.619 "peer_address": { 00:19:58.619 "trtype": "TCP", 00:19:58.619 "adrfam": "IPv4", 00:19:58.619 "traddr": "10.0.0.1", 00:19:58.619 "trsvcid": "51020" 00:19:58.619 }, 00:19:58.619 "auth": { 00:19:58.619 "state": "completed", 00:19:58.619 "digest": "sha384", 00:19:58.619 "dhgroup": "ffdhe6144" 00:19:58.619 } 00:19:58.619 } 00:19:58.619 ]' 00:19:58.619 07:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.619 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.619 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.880 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.880 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.880 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.880 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.880 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.880 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:19:59.822 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.822 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.822 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.822 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.822 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.822 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.822 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.822 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.822 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.393 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.393 { 00:20:00.393 "cntlid": 87, 00:20:00.393 "qid": 0, 00:20:00.393 "state": "enabled", 00:20:00.393 "thread": "nvmf_tgt_poll_group_000", 00:20:00.393 "listen_address": { 00:20:00.393 "trtype": "TCP", 00:20:00.393 "adrfam": "IPv4", 00:20:00.393 "traddr": "10.0.0.2", 00:20:00.393 "trsvcid": "4420" 00:20:00.393 }, 00:20:00.393 "peer_address": { 00:20:00.393 "trtype": "TCP", 00:20:00.393 "adrfam": "IPv4", 00:20:00.393 "traddr": "10.0.0.1", 00:20:00.393 "trsvcid": "51044" 00:20:00.393 }, 00:20:00.393 "auth": { 00:20:00.393 "state": "completed", 00:20:00.393 "digest": "sha384", 00:20:00.393 "dhgroup": "ffdhe6144" 00:20:00.393 } 00:20:00.393 } 00:20:00.393 ]' 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.393 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.654 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.654 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.654 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.654 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.654 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.654 07:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.608 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.235 00:20:02.235 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.235 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.235 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.235 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.235 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.235 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.235 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.496 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.496 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.496 { 00:20:02.496 "cntlid": 89, 00:20:02.496 "qid": 0, 00:20:02.496 "state": "enabled", 00:20:02.496 "thread": "nvmf_tgt_poll_group_000", 00:20:02.496 "listen_address": { 00:20:02.496 "trtype": "TCP", 00:20:02.496 "adrfam": "IPv4", 00:20:02.496 "traddr": "10.0.0.2", 00:20:02.496 "trsvcid": "4420" 00:20:02.496 }, 00:20:02.496 "peer_address": { 00:20:02.496 "trtype": "TCP", 00:20:02.496 "adrfam": "IPv4", 00:20:02.496 "traddr": "10.0.0.1", 00:20:02.496 "trsvcid": "51088" 00:20:02.496 }, 00:20:02.496 "auth": { 00:20:02.496 "state": "completed", 00:20:02.496 "digest": "sha384", 00:20:02.496 "dhgroup": "ffdhe8192" 00:20:02.496 } 00:20:02.496 } 00:20:02.496 ]' 00:20:02.496 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.496 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.496 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.496 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.496 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.496 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:02.496 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.496 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.757 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:20:03.329 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.329 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:03.329 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.329 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.329 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.329 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.329 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.329 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.589 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:03.590 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.590 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.590 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:03.590 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:03.590 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.590 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.590 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.590 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.590 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.590 07:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.590 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.161 00:20:04.161 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.161 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.161 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.422 { 00:20:04.422 "cntlid": 91, 00:20:04.422 "qid": 0, 00:20:04.422 "state": "enabled", 00:20:04.422 "thread": "nvmf_tgt_poll_group_000", 00:20:04.422 "listen_address": { 00:20:04.422 "trtype": "TCP", 00:20:04.422 "adrfam": "IPv4", 00:20:04.422 "traddr": "10.0.0.2", 00:20:04.422 "trsvcid": "4420" 00:20:04.422 }, 00:20:04.422 "peer_address": { 00:20:04.422 "trtype": "TCP", 00:20:04.422 "adrfam": "IPv4", 00:20:04.422 "traddr": "10.0.0.1", 00:20:04.422 "trsvcid": "51112" 00:20:04.422 }, 00:20:04.422 "auth": { 00:20:04.422 "state": "completed", 00:20:04.422 "digest": "sha384", 00:20:04.422 "dhgroup": "ffdhe8192" 00:20:04.422 } 00:20:04.422 } 00:20:04.422 ]' 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.422 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:20:05.255 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.255 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.255 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.255 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.255 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.255 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.255 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:05.255 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:05.516 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.088 00:20:06.088 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.088 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.088 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.088 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.088 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.088 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.088 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.088 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.088 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.088 { 00:20:06.088 "cntlid": 93, 00:20:06.088 "qid": 0, 00:20:06.088 "state": "enabled", 00:20:06.088 "thread": "nvmf_tgt_poll_group_000", 00:20:06.088 "listen_address": { 00:20:06.088 "trtype": "TCP", 00:20:06.088 "adrfam": "IPv4", 00:20:06.088 "traddr": "10.0.0.2", 00:20:06.088 "trsvcid": "4420" 00:20:06.088 }, 00:20:06.088 "peer_address": { 00:20:06.088 "trtype": "TCP", 00:20:06.088 "adrfam": "IPv4", 00:20:06.088 "traddr": "10.0.0.1", 00:20:06.088 "trsvcid": "51144" 00:20:06.088 }, 00:20:06.088 "auth": { 00:20:06.088 "state": "completed", 00:20:06.088 "digest": "sha384", 00:20:06.088 "dhgroup": "ffdhe8192" 00:20:06.088 } 00:20:06.088 } 00:20:06.088 ]' 00:20:06.088 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.348 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.348 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.348 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:06.348 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.348 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.348 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.348 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.608 07:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:20:07.178 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.178 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.178 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.178 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.178 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.178 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.178 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:07.178 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.440 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.013 00:20:08.013 07:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.013 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.013 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.013 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.013 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.013 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.013 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.013 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.013 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.013 { 00:20:08.013 "cntlid": 95, 00:20:08.013 "qid": 0, 00:20:08.013 "state": "enabled", 00:20:08.013 "thread": "nvmf_tgt_poll_group_000", 00:20:08.013 "listen_address": { 00:20:08.013 "trtype": "TCP", 00:20:08.013 "adrfam": "IPv4", 00:20:08.013 "traddr": "10.0.0.2", 00:20:08.013 "trsvcid": "4420" 00:20:08.013 }, 00:20:08.013 "peer_address": { 00:20:08.013 "trtype": "TCP", 00:20:08.013 "adrfam": "IPv4", 00:20:08.013 "traddr": "10.0.0.1", 00:20:08.013 "trsvcid": "53916" 00:20:08.013 }, 00:20:08.013 "auth": { 00:20:08.013 "state": "completed", 00:20:08.013 "digest": "sha384", 00:20:08.013 "dhgroup": "ffdhe8192" 00:20:08.013 } 00:20:08.013 } 00:20:08.013 ]' 00:20:08.013 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.273 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.273 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.273 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.273 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.273 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.273 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.273 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.274 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:20:09.214 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.214 07:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:09.214 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.214 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.214 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.214 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:09.214 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.215 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.477 00:20:09.477 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.477 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.477 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.740 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.740 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.740 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.740 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.740 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.740 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.740 { 00:20:09.740 "cntlid": 97, 00:20:09.740 "qid": 0, 00:20:09.740 "state": "enabled", 00:20:09.740 "thread": "nvmf_tgt_poll_group_000", 00:20:09.740 "listen_address": { 00:20:09.740 "trtype": "TCP", 00:20:09.740 "adrfam": "IPv4", 00:20:09.740 "traddr": "10.0.0.2", 00:20:09.740 "trsvcid": "4420" 00:20:09.740 }, 00:20:09.740 "peer_address": { 00:20:09.740 "trtype": "TCP", 00:20:09.740 "adrfam": "IPv4", 00:20:09.740 "traddr": "10.0.0.1", 00:20:09.740 "trsvcid": "53942" 00:20:09.740 }, 00:20:09.740 "auth": { 00:20:09.740 "state": "completed", 00:20:09.740 "digest": "sha512", 00:20:09.740 "dhgroup": "null" 00:20:09.740 } 00:20:09.740 } 00:20:09.740 ]' 00:20:09.740 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.740 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.740 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.740 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:09.740 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.740 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.740 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.740 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.001 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:20:10.942 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.942 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.942 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.942 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.942 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.942 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.942 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:10.942 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.943 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.203 00:20:11.203 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.203 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.203 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.203 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:20:11.203 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.203 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.203 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.464 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.464 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.464 { 00:20:11.464 "cntlid": 99, 00:20:11.464 "qid": 0, 00:20:11.464 "state": "enabled", 00:20:11.464 "thread": "nvmf_tgt_poll_group_000", 00:20:11.464 "listen_address": { 00:20:11.464 "trtype": "TCP", 00:20:11.464 "adrfam": "IPv4", 00:20:11.464 "traddr": "10.0.0.2", 00:20:11.464 "trsvcid": "4420" 00:20:11.464 }, 00:20:11.464 "peer_address": { 00:20:11.464 "trtype": "TCP", 00:20:11.464 "adrfam": "IPv4", 00:20:11.464 "traddr": "10.0.0.1", 00:20:11.464 "trsvcid": "53968" 00:20:11.464 }, 00:20:11.464 "auth": { 00:20:11.464 "state": "completed", 00:20:11.464 "digest": "sha512", 00:20:11.464 "dhgroup": "null" 00:20:11.464 } 00:20:11.464 } 00:20:11.464 ]' 00:20:11.464 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.464 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.464 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.464 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:11.464 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.464 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.464 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.464 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.464 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:20:12.406 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.406 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:12.406 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.407 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.407 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.407 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.407 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:12.407 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.667 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.927 00:20:12.927 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.927 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.927 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.927 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.927 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.927 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.927 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.927 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.927 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.927 { 00:20:12.927 "cntlid": 101, 00:20:12.927 "qid": 0, 00:20:12.927 "state": "enabled", 00:20:12.927 "thread": "nvmf_tgt_poll_group_000", 00:20:12.927 "listen_address": { 00:20:12.927 "trtype": "TCP", 00:20:12.927 "adrfam": "IPv4", 00:20:12.927 "traddr": "10.0.0.2", 00:20:12.927 "trsvcid": "4420" 00:20:12.927 }, 00:20:12.927 "peer_address": { 00:20:12.927 "trtype": "TCP", 00:20:12.927 "adrfam": "IPv4", 00:20:12.927 "traddr": "10.0.0.1", 00:20:12.927 "trsvcid": "53994" 00:20:12.927 }, 00:20:12.927 "auth": { 00:20:12.927 "state": "completed", 00:20:12.927 "digest": "sha512", 00:20:12.927 "dhgroup": "null" 00:20:12.927 } 00:20:12.928 } 00:20:12.928 ]' 00:20:12.928 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.188 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.188 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.188 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:13.188 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.188 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.188 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.188 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.448 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:20:14.019 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.019 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.019 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.019 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.019 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.019 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.019 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:14.019 07:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.280 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.540 00:20:14.540 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.540 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.540 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.801 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.801 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.801 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.801 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.801 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.801 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.801 { 00:20:14.801 "cntlid": 103, 00:20:14.801 "qid": 0, 00:20:14.801 "state": "enabled", 00:20:14.801 "thread": "nvmf_tgt_poll_group_000", 00:20:14.801 "listen_address": { 00:20:14.801 
"trtype": "TCP", 00:20:14.801 "adrfam": "IPv4", 00:20:14.801 "traddr": "10.0.0.2", 00:20:14.801 "trsvcid": "4420" 00:20:14.801 }, 00:20:14.801 "peer_address": { 00:20:14.801 "trtype": "TCP", 00:20:14.801 "adrfam": "IPv4", 00:20:14.801 "traddr": "10.0.0.1", 00:20:14.801 "trsvcid": "54022" 00:20:14.801 }, 00:20:14.801 "auth": { 00:20:14.801 "state": "completed", 00:20:14.801 "digest": "sha512", 00:20:14.801 "dhgroup": "null" 00:20:14.801 } 00:20:14.801 } 00:20:14.801 ]' 00:20:14.801 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.801 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.801 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.801 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:14.801 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.801 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.801 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.801 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.061 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:20:15.632 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:15.893 07:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.893 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.162 00:20:16.162 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.162 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.162 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.435 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.435 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.435 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.435 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.435 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.435 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.435 { 00:20:16.435 "cntlid": 105, 00:20:16.435 "qid": 0, 00:20:16.435 "state": "enabled", 00:20:16.435 "thread": "nvmf_tgt_poll_group_000", 00:20:16.435 "listen_address": { 00:20:16.435 "trtype": "TCP", 00:20:16.435 "adrfam": "IPv4", 00:20:16.435 "traddr": "10.0.0.2", 00:20:16.435 "trsvcid": "4420" 00:20:16.435 }, 00:20:16.435 "peer_address": { 00:20:16.435 "trtype": "TCP", 00:20:16.435 "adrfam": "IPv4", 00:20:16.435 "traddr": "10.0.0.1", 00:20:16.435 "trsvcid": "54030" 
00:20:16.435 }, 00:20:16.435 "auth": { 00:20:16.435 "state": "completed", 00:20:16.435 "digest": "sha512", 00:20:16.435 "dhgroup": "ffdhe2048" 00:20:16.435 } 00:20:16.435 } 00:20:16.435 ]' 00:20:16.435 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.435 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.435 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.435 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:16.436 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.436 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.436 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.436 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.754 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:20:17.325 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.325 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:17.325 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.325 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.325 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.325 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.325 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:17.325 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe2048 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.585 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.846 00:20:17.846 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.846 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.846 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.107 { 00:20:18.107 "cntlid": 107, 00:20:18.107 "qid": 0, 00:20:18.107 "state": "enabled", 00:20:18.107 "thread": "nvmf_tgt_poll_group_000", 00:20:18.107 "listen_address": { 00:20:18.107 "trtype": "TCP", 00:20:18.107 "adrfam": "IPv4", 00:20:18.107 "traddr": "10.0.0.2", 00:20:18.107 "trsvcid": "4420" 00:20:18.107 }, 00:20:18.107 "peer_address": { 00:20:18.107 "trtype": "TCP", 00:20:18.107 "adrfam": "IPv4", 00:20:18.107 "traddr": "10.0.0.1", 00:20:18.107 "trsvcid": "35068" 00:20:18.107 }, 00:20:18.107 "auth": { 00:20:18.107 "state": "completed", 00:20:18.107 "digest": "sha512", 00:20:18.107 "dhgroup": "ffdhe2048" 00:20:18.107 } 00:20:18.107 } 00:20:18.107 ]' 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
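target/auth.sh@44-@48 verify the authenticated qpair by parsing the JSON returned by nvmf_subsystem_get_qpairs with jq and comparing the auth fields. A minimal, abridged sketch of those checks, using the same structure printed just above (a sketch, not the literal script):

# Abridged sample of the get_qpairs output seen in the log (listen/peer addresses trimmed).
qpairs='[{"cntlid": 107, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe2048"}}]'

# The same assertions the script makes on digest, dhgroup and auth state.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]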
00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.107 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.368 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:20:19.312 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.313 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.574 00:20:19.574 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.574 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.574 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.574 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.574 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.574 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.574 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.574 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.574 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.574 { 00:20:19.574 "cntlid": 109, 00:20:19.574 "qid": 0, 00:20:19.574 "state": "enabled", 00:20:19.574 "thread": "nvmf_tgt_poll_group_000", 00:20:19.574 "listen_address": { 00:20:19.574 "trtype": "TCP", 00:20:19.574 "adrfam": "IPv4", 00:20:19.574 "traddr": "10.0.0.2", 00:20:19.574 "trsvcid": "4420" 00:20:19.574 }, 00:20:19.574 "peer_address": { 00:20:19.574 "trtype": "TCP", 00:20:19.574 "adrfam": "IPv4", 00:20:19.574 "traddr": "10.0.0.1", 00:20:19.574 "trsvcid": "35110" 00:20:19.574 }, 00:20:19.574 "auth": { 00:20:19.574 "state": "completed", 00:20:19.574 "digest": "sha512", 00:20:19.574 "dhgroup": "ffdhe2048" 00:20:19.574 } 00:20:19.574 } 00:20:19.574 ]' 00:20:19.574 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.835 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.835 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.835 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:19.835 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.835 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.835 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.835 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.835 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:20:20.778 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.778 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:20.778 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.778 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.778 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.778 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.778 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:20.778 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.778 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.039 00:20:21.039 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.039 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.039 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.302 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.302 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.302 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.302 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.302 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.302 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.302 { 00:20:21.302 "cntlid": 111, 00:20:21.302 "qid": 0, 00:20:21.302 "state": "enabled", 00:20:21.302 "thread": "nvmf_tgt_poll_group_000", 00:20:21.302 "listen_address": { 00:20:21.302 "trtype": "TCP", 00:20:21.302 "adrfam": "IPv4", 00:20:21.302 "traddr": "10.0.0.2", 00:20:21.302 "trsvcid": "4420" 00:20:21.302 }, 00:20:21.302 "peer_address": { 00:20:21.302 "trtype": "TCP", 00:20:21.302 "adrfam": "IPv4", 00:20:21.302 "traddr": "10.0.0.1", 00:20:21.302 "trsvcid": "35146" 00:20:21.302 }, 00:20:21.302 "auth": { 00:20:21.302 "state": "completed", 00:20:21.302 "digest": "sha512", 00:20:21.302 "dhgroup": "ffdhe2048" 00:20:21.302 } 00:20:21.302 } 00:20:21.302 ]' 00:20:21.302 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.302 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.302 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.302 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.302 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.564 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.564 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.564 07:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.564 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.508 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.769 00:20:22.769 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.769 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.769 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.030 { 00:20:23.030 "cntlid": 113, 00:20:23.030 "qid": 0, 00:20:23.030 "state": "enabled", 00:20:23.030 "thread": "nvmf_tgt_poll_group_000", 00:20:23.030 "listen_address": { 00:20:23.030 "trtype": "TCP", 00:20:23.030 "adrfam": "IPv4", 00:20:23.030 "traddr": "10.0.0.2", 00:20:23.030 "trsvcid": "4420" 00:20:23.030 }, 00:20:23.030 "peer_address": { 00:20:23.030 "trtype": "TCP", 00:20:23.030 "adrfam": "IPv4", 00:20:23.030 "traddr": "10.0.0.1", 00:20:23.030 "trsvcid": "35168" 00:20:23.030 }, 00:20:23.030 "auth": { 00:20:23.030 "state": "completed", 00:20:23.030 "digest": "sha512", 00:20:23.030 "dhgroup": "ffdhe3072" 00:20:23.030 } 00:20:23.030 } 00:20:23.030 ]' 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.030 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.290 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.233 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.494 00:20:24.494 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.494 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.494 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.494 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.494 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.494 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.494 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.494 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.755 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.755 { 00:20:24.755 "cntlid": 115, 00:20:24.755 "qid": 0, 00:20:24.755 "state": "enabled", 00:20:24.755 "thread": "nvmf_tgt_poll_group_000", 00:20:24.755 "listen_address": { 00:20:24.755 "trtype": "TCP", 00:20:24.755 "adrfam": "IPv4", 00:20:24.755 "traddr": "10.0.0.2", 00:20:24.755 "trsvcid": "4420" 00:20:24.755 }, 00:20:24.755 "peer_address": { 00:20:24.755 "trtype": "TCP", 00:20:24.755 "adrfam": "IPv4", 00:20:24.755 "traddr": "10.0.0.1", 00:20:24.755 "trsvcid": "35192" 00:20:24.755 }, 00:20:24.755 "auth": { 00:20:24.755 "state": "completed", 00:20:24.755 "digest": "sha512", 00:20:24.755 "dhgroup": "ffdhe3072" 00:20:24.755 } 00:20:24.756 } 00:20:24.756 ]' 00:20:24.756 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.756 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.756 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.756 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.756 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.756 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.756 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.756 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.017 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret 
DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:20:25.589 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.589 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.589 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.589 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.589 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.589 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.589 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:25.589 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:25.848 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:25.848 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.848 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:25.848 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:25.848 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:25.848 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.848 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.848 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.848 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.848 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.848 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.849 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.115 00:20:26.115 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- 
# hostrpc bdev_nvme_get_controllers 00:20:26.115 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.115 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.378 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.378 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.378 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.378 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.378 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.378 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.378 { 00:20:26.378 "cntlid": 117, 00:20:26.378 "qid": 0, 00:20:26.378 "state": "enabled", 00:20:26.378 "thread": "nvmf_tgt_poll_group_000", 00:20:26.378 "listen_address": { 00:20:26.378 "trtype": "TCP", 00:20:26.378 "adrfam": "IPv4", 00:20:26.378 "traddr": "10.0.0.2", 00:20:26.378 "trsvcid": "4420" 00:20:26.378 }, 00:20:26.378 "peer_address": { 00:20:26.378 "trtype": "TCP", 00:20:26.378 "adrfam": "IPv4", 00:20:26.378 "traddr": "10.0.0.1", 00:20:26.378 "trsvcid": "35226" 00:20:26.378 }, 00:20:26.378 "auth": { 00:20:26.378 "state": "completed", 00:20:26.379 "digest": "sha512", 00:20:26.379 "dhgroup": "ffdhe3072" 00:20:26.379 } 00:20:26.379 } 00:20:26.379 ]' 00:20:26.379 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.379 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.379 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.379 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.379 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.379 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.379 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.379 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.639 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:20:27.210 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.470 07:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.470 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.729 00:20:27.729 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.729 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.729 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.989 07:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.989 { 00:20:27.989 "cntlid": 119, 00:20:27.989 "qid": 0, 00:20:27.989 "state": "enabled", 00:20:27.989 "thread": "nvmf_tgt_poll_group_000", 00:20:27.989 "listen_address": { 00:20:27.989 "trtype": "TCP", 00:20:27.989 "adrfam": "IPv4", 00:20:27.989 "traddr": "10.0.0.2", 00:20:27.989 "trsvcid": "4420" 00:20:27.989 }, 00:20:27.989 "peer_address": { 00:20:27.989 "trtype": "TCP", 00:20:27.989 "adrfam": "IPv4", 00:20:27.989 "traddr": "10.0.0.1", 00:20:27.989 "trsvcid": "37874" 00:20:27.989 }, 00:20:27.989 "auth": { 00:20:27.989 "state": "completed", 00:20:27.989 "digest": "sha512", 00:20:27.989 "dhgroup": "ffdhe3072" 00:20:27.989 } 00:20:27.989 } 00:20:27.989 ]' 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.989 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.249 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.187 07:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.187 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.447 00:20:29.447 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.447 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.447 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.706 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.706 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.706 07:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.706 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.706 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.706 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.706 { 00:20:29.706 "cntlid": 121, 00:20:29.706 "qid": 0, 00:20:29.706 "state": "enabled", 00:20:29.706 "thread": "nvmf_tgt_poll_group_000", 00:20:29.706 "listen_address": { 00:20:29.706 "trtype": "TCP", 00:20:29.706 "adrfam": "IPv4", 00:20:29.706 "traddr": "10.0.0.2", 00:20:29.706 "trsvcid": "4420" 00:20:29.706 }, 00:20:29.706 "peer_address": { 00:20:29.706 "trtype": "TCP", 00:20:29.706 "adrfam": "IPv4", 00:20:29.706 "traddr": "10.0.0.1", 00:20:29.706 "trsvcid": "37912" 00:20:29.706 }, 00:20:29.706 "auth": { 00:20:29.706 "state": "completed", 00:20:29.706 "digest": "sha512", 00:20:29.706 "dhgroup": "ffdhe4096" 00:20:29.706 } 00:20:29.706 } 00:20:29.706 ]' 00:20:29.706 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.706 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.706 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.706 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:29.706 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.706 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.706 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.707 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.966 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:20:30.908 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.908 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.908 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.908 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.908 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.908 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # 
for keyid in "${!keys[@]}" 00:20:30.908 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:30.908 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.908 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.169 00:20:31.169 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.169 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.169 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.433 { 00:20:31.433 "cntlid": 123, 00:20:31.433 "qid": 0, 00:20:31.433 "state": "enabled", 00:20:31.433 "thread": "nvmf_tgt_poll_group_000", 00:20:31.433 "listen_address": { 00:20:31.433 "trtype": "TCP", 00:20:31.433 "adrfam": "IPv4", 00:20:31.433 "traddr": "10.0.0.2", 00:20:31.433 "trsvcid": "4420" 00:20:31.433 }, 00:20:31.433 "peer_address": { 00:20:31.433 "trtype": "TCP", 00:20:31.433 "adrfam": "IPv4", 00:20:31.433 "traddr": "10.0.0.1", 00:20:31.433 "trsvcid": "37948" 00:20:31.433 }, 00:20:31.433 "auth": { 00:20:31.433 "state": "completed", 00:20:31.433 "digest": "sha512", 00:20:31.433 "dhgroup": "ffdhe4096" 00:20:31.433 } 00:20:31.433 } 00:20:31.433 ]' 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.433 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.739 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:20:32.310 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.310 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.310 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.310 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.310 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.310 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.310 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:32.310 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.575 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.836 00:20:32.836 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.836 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.836 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.096 { 00:20:33.096 "cntlid": 125, 00:20:33.096 "qid": 0, 00:20:33.096 "state": "enabled", 00:20:33.096 "thread": "nvmf_tgt_poll_group_000", 00:20:33.096 "listen_address": { 00:20:33.096 "trtype": "TCP", 00:20:33.096 "adrfam": "IPv4", 
00:20:33.096 "traddr": "10.0.0.2", 00:20:33.096 "trsvcid": "4420" 00:20:33.096 }, 00:20:33.096 "peer_address": { 00:20:33.096 "trtype": "TCP", 00:20:33.096 "adrfam": "IPv4", 00:20:33.096 "traddr": "10.0.0.1", 00:20:33.096 "trsvcid": "37970" 00:20:33.096 }, 00:20:33.096 "auth": { 00:20:33.096 "state": "completed", 00:20:33.096 "digest": "sha512", 00:20:33.096 "dhgroup": "ffdhe4096" 00:20:33.096 } 00:20:33.096 } 00:20:33.096 ]' 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.096 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.358 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:20:33.929 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.929 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.929 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.930 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.930 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.930 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.930 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:33.930 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.191 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.451 00:20:34.451 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.451 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.451 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.712 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.712 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.712 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.712 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.712 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.712 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.712 { 00:20:34.712 "cntlid": 127, 00:20:34.712 "qid": 0, 00:20:34.712 "state": "enabled", 00:20:34.712 "thread": "nvmf_tgt_poll_group_000", 00:20:34.712 "listen_address": { 00:20:34.712 "trtype": "TCP", 00:20:34.712 "adrfam": "IPv4", 00:20:34.712 "traddr": "10.0.0.2", 00:20:34.712 "trsvcid": "4420" 00:20:34.712 }, 00:20:34.712 "peer_address": { 00:20:34.712 "trtype": "TCP", 00:20:34.712 "adrfam": "IPv4", 00:20:34.712 "traddr": "10.0.0.1", 00:20:34.712 "trsvcid": "37980" 00:20:34.712 }, 00:20:34.712 "auth": { 00:20:34.712 "state": "completed", 00:20:34.712 "digest": "sha512", 00:20:34.712 "dhgroup": "ffdhe4096" 00:20:34.712 } 00:20:34.712 } 00:20:34.712 ]' 
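After each attach, the test reads the subsystem's qpairs back and asserts that the negotiated digest, dhgroup, and auth state match what was requested, then drives the same handshake again from the kernel initiator with nvme-cli before removing the host. A sketch of that verification step follows, reusing the variables from the sketch above; the shape is assumed from the jq filters printed in the trace, and the DHHC-1 secrets are placeholders here, with the real values appearing on the nvme connect lines of this log.

  # The host-side controller must exist before the qpair check.
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Check the negotiated auth parameters reported by the target.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear down the RPC-attached controller, then repeat the handshake through
  # the kernel host with nvme-cli and the corresponding DHHC-1 secrets.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret "DHHC-1:..."   # plus --dhchap-ctrl-secret "DHHC-1:..." when a ctrlr key is set
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"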
00:20:34.712 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.712 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.712 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.712 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.712 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.712 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.712 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.712 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.972 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:20:35.914 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.914 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.914 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.914 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.914 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.914 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.914 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.914 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:35.914 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:35.914 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:20:35.914 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.914 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.914 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:35.914 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.914 07:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.914 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.914 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.914 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.914 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.914 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.914 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.175 00:20:36.175 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.175 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.175 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.435 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.435 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.435 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.435 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.435 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.436 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.436 { 00:20:36.436 "cntlid": 129, 00:20:36.436 "qid": 0, 00:20:36.436 "state": "enabled", 00:20:36.436 "thread": "nvmf_tgt_poll_group_000", 00:20:36.436 "listen_address": { 00:20:36.436 "trtype": "TCP", 00:20:36.436 "adrfam": "IPv4", 00:20:36.436 "traddr": "10.0.0.2", 00:20:36.436 "trsvcid": "4420" 00:20:36.436 }, 00:20:36.436 "peer_address": { 00:20:36.436 "trtype": "TCP", 00:20:36.436 "adrfam": "IPv4", 00:20:36.436 "traddr": "10.0.0.1", 00:20:36.436 "trsvcid": "38022" 00:20:36.436 }, 00:20:36.436 "auth": { 00:20:36.436 "state": "completed", 00:20:36.436 "digest": "sha512", 00:20:36.436 "dhgroup": "ffdhe6144" 00:20:36.436 } 00:20:36.436 } 00:20:36.436 ]' 00:20:36.436 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.436 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.436 07:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.436 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:36.436 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.436 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.436 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.436 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.696 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.638 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.898 00:20:37.898 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.898 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.898 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.159 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.159 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.159 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.159 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.159 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.159 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.159 { 00:20:38.159 "cntlid": 131, 00:20:38.159 "qid": 0, 00:20:38.159 "state": "enabled", 00:20:38.159 "thread": "nvmf_tgt_poll_group_000", 00:20:38.159 "listen_address": { 00:20:38.159 "trtype": "TCP", 00:20:38.159 "adrfam": "IPv4", 00:20:38.159 "traddr": "10.0.0.2", 00:20:38.159 "trsvcid": "4420" 00:20:38.159 }, 00:20:38.159 "peer_address": { 00:20:38.159 "trtype": "TCP", 00:20:38.159 "adrfam": "IPv4", 00:20:38.159 "traddr": "10.0.0.1", 00:20:38.159 "trsvcid": "60874" 00:20:38.159 }, 00:20:38.159 "auth": { 00:20:38.159 "state": "completed", 00:20:38.159 "digest": "sha512", 00:20:38.159 "dhgroup": "ffdhe6144" 00:20:38.159 } 00:20:38.159 } 00:20:38.159 ]' 00:20:38.159 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.159 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.159 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.159 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:38.159 07:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.419 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.419 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.419 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.420 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.360 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.621 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.880 { 00:20:39.880 "cntlid": 133, 00:20:39.880 "qid": 0, 00:20:39.880 "state": "enabled", 00:20:39.880 "thread": "nvmf_tgt_poll_group_000", 00:20:39.880 "listen_address": { 00:20:39.880 "trtype": "TCP", 00:20:39.880 "adrfam": "IPv4", 00:20:39.880 "traddr": "10.0.0.2", 00:20:39.880 "trsvcid": "4420" 00:20:39.880 }, 00:20:39.880 "peer_address": { 00:20:39.880 "trtype": "TCP", 00:20:39.880 "adrfam": "IPv4", 00:20:39.880 "traddr": "10.0.0.1", 00:20:39.880 "trsvcid": "60912" 00:20:39.880 }, 00:20:39.880 "auth": { 00:20:39.880 "state": "completed", 00:20:39.880 "digest": "sha512", 00:20:39.880 "dhgroup": "ffdhe6144" 00:20:39.880 } 00:20:39.880 } 00:20:39.880 ]' 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.880 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.140 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.140 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.140 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.140 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
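Before moving to the next key the test tears the SPDK host controller down (the bdev_nvme_detach_controller nvme0 call in the last entry above) and then repeats the handshake from the kernel initiator with nvme-cli, passing the generated DHHC-1 secrets directly. A hedged sketch of that host-side round trip is below: the DHHC-1 values are placeholders for secrets generated during setup, --dhchap-ctrl-secret is only supplied on iterations that configured a controller key, and rpc_cmd stands for the suite's target-side RPC wrapper.

# Kernel-host round trip against the same subsystem, rebuilt from the trace.
# <host-secret>/<ctrl-secret> are placeholders; the real DHHC-1 strings come from
# the test's key generation step and are visible in the nvme connect entries above.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'

# On success the log records "NQN:... disconnected 1 controller(s)" at teardown,
# after which the host entry is removed so the next key can be installed.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be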
00:20:40.140 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.140 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:20:41.082 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.082 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.082 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.082 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.083 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.654 00:20:41.654 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.654 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.654 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.654 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.654 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.654 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.654 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.654 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.654 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.654 { 00:20:41.654 "cntlid": 135, 00:20:41.654 "qid": 0, 00:20:41.654 "state": "enabled", 00:20:41.654 "thread": "nvmf_tgt_poll_group_000", 00:20:41.654 "listen_address": { 00:20:41.654 "trtype": "TCP", 00:20:41.654 "adrfam": "IPv4", 00:20:41.654 "traddr": "10.0.0.2", 00:20:41.654 "trsvcid": "4420" 00:20:41.654 }, 00:20:41.654 "peer_address": { 00:20:41.654 "trtype": "TCP", 00:20:41.654 "adrfam": "IPv4", 00:20:41.654 "traddr": "10.0.0.1", 00:20:41.654 "trsvcid": "60932" 00:20:41.654 }, 00:20:41.654 "auth": { 00:20:41.654 "state": "completed", 00:20:41.654 "digest": "sha512", 00:20:41.654 "dhgroup": "ffdhe6144" 00:20:41.654 } 00:20:41.654 } 00:20:41.654 ]' 00:20:41.654 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.654 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.654 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.914 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.914 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.914 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.914 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.914 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.915 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:20:42.857 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.857 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:42.857 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.857 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.857 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.857 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.857 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.857 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:42.857 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.857 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.428 00:20:43.428 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.428 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.428 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.689 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.689 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.689 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.689 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.689 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.689 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.689 { 00:20:43.689 "cntlid": 137, 00:20:43.689 "qid": 0, 00:20:43.689 "state": "enabled", 00:20:43.689 "thread": "nvmf_tgt_poll_group_000", 00:20:43.689 "listen_address": { 00:20:43.689 "trtype": "TCP", 00:20:43.689 "adrfam": "IPv4", 00:20:43.689 "traddr": "10.0.0.2", 00:20:43.689 "trsvcid": "4420" 00:20:43.689 }, 00:20:43.689 "peer_address": { 00:20:43.689 "trtype": "TCP", 00:20:43.689 "adrfam": "IPv4", 00:20:43.689 "traddr": "10.0.0.1", 00:20:43.689 "trsvcid": "60954" 00:20:43.689 }, 00:20:43.689 "auth": { 00:20:43.689 "state": "completed", 00:20:43.689 "digest": "sha512", 00:20:43.689 "dhgroup": "ffdhe8192" 00:20:43.689 } 00:20:43.689 } 00:20:43.689 ]' 00:20:43.689 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.689 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.689 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.689 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:43.689 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.689 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.689 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.689 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.950 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret 
DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:20:44.891 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.891 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.891 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.891 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.891 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.891 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.891 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:44.891 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.891 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.462 00:20:45.462 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.462 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.462 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.723 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.723 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.723 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.723 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.723 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.723 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.723 { 00:20:45.723 "cntlid": 139, 00:20:45.723 "qid": 0, 00:20:45.723 "state": "enabled", 00:20:45.723 "thread": "nvmf_tgt_poll_group_000", 00:20:45.723 "listen_address": { 00:20:45.723 "trtype": "TCP", 00:20:45.723 "adrfam": "IPv4", 00:20:45.723 "traddr": "10.0.0.2", 00:20:45.723 "trsvcid": "4420" 00:20:45.724 }, 00:20:45.724 "peer_address": { 00:20:45.724 "trtype": "TCP", 00:20:45.724 "adrfam": "IPv4", 00:20:45.724 "traddr": "10.0.0.1", 00:20:45.724 "trsvcid": "60974" 00:20:45.724 }, 00:20:45.724 "auth": { 00:20:45.724 "state": "completed", 00:20:45.724 "digest": "sha512", 00:20:45.724 "dhgroup": "ffdhe8192" 00:20:45.724 } 00:20:45.724 } 00:20:45.724 ]' 00:20:45.724 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.724 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.724 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.724 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:45.724 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.724 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.724 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.724 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.984 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:OGQyZWZkMTMwOGJiN2MyMjU5MThmNTI3NWQ1MWU3OGW7hyQT: --dhchap-ctrl-secret DHHC-1:02:NTliNDJhOThiYzYzM2FiMzVmODBlOWZiYjEwNDhkYzE5ODc4NmZkNzM5NDg2Njhlr9uvyA==: 00:20:46.553 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.553 07:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:46.553 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.553 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.553 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.553 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.553 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:46.553 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.844 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.419 00:20:47.419 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.419 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.419 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.419 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.419 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.419 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.419 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.720 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.720 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.720 { 00:20:47.720 "cntlid": 141, 00:20:47.720 "qid": 0, 00:20:47.720 "state": "enabled", 00:20:47.720 "thread": "nvmf_tgt_poll_group_000", 00:20:47.720 "listen_address": { 00:20:47.720 "trtype": "TCP", 00:20:47.720 "adrfam": "IPv4", 00:20:47.720 "traddr": "10.0.0.2", 00:20:47.720 "trsvcid": "4420" 00:20:47.720 }, 00:20:47.720 "peer_address": { 00:20:47.720 "trtype": "TCP", 00:20:47.720 "adrfam": "IPv4", 00:20:47.720 "traddr": "10.0.0.1", 00:20:47.720 "trsvcid": "54912" 00:20:47.720 }, 00:20:47.720 "auth": { 00:20:47.720 "state": "completed", 00:20:47.720 "digest": "sha512", 00:20:47.720 "dhgroup": "ffdhe8192" 00:20:47.720 } 00:20:47.720 } 00:20:47.720 ]' 00:20:47.720 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.720 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.720 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.720 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:47.720 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.720 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.720 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.720 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.981 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:MzVlNDQ4OTA4NzYwYzg1ZWZhOTU1YTQ2YjZkZjllZTA1ZDg2OGNmOGNlNDUzMDE0NVPl3w==: --dhchap-ctrl-secret DHHC-1:01:ZjBiMTA2Y2VhNTc4MjQyNzcyOGRmYWY3OWU4NTc1ZGMyQSaC: 00:20:48.552 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.552 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:48.552 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.552 07:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.552 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.552 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.552 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:48.552 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.812 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.383 00:20:49.383 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.383 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.383 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.383 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.383 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.383 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:49.383 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.383 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.383 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.383 { 00:20:49.383 "cntlid": 143, 00:20:49.383 "qid": 0, 00:20:49.383 "state": "enabled", 00:20:49.383 "thread": "nvmf_tgt_poll_group_000", 00:20:49.383 "listen_address": { 00:20:49.383 "trtype": "TCP", 00:20:49.383 "adrfam": "IPv4", 00:20:49.383 "traddr": "10.0.0.2", 00:20:49.383 "trsvcid": "4420" 00:20:49.383 }, 00:20:49.383 "peer_address": { 00:20:49.383 "trtype": "TCP", 00:20:49.383 "adrfam": "IPv4", 00:20:49.383 "traddr": "10.0.0.1", 00:20:49.383 "trsvcid": "54930" 00:20:49.383 }, 00:20:49.383 "auth": { 00:20:49.383 "state": "completed", 00:20:49.383 "digest": "sha512", 00:20:49.383 "dhgroup": "ffdhe8192" 00:20:49.383 } 00:20:49.383 } 00:20:49.383 ]' 00:20:49.383 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.644 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.644 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.644 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.644 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.644 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.644 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.644 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.904 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:20:50.475 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.475 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.475 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.475 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.475 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.475 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:50.475 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:50.475 07:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:50.475 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:50.475 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:50.475 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.736 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.309 00:20:51.309 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.309 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.309 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.309 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.309 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
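Having iterated the per-dhgroup cases, the test (target/auth.sh@102 onward) joins all supported digests and dhgroups into comma-separated lists with IFS=',' and hands them to bdev_nvme_set_options in one call, then reruns connect_authenticate; the entries that follow also exercise the failure path, where attaching with a key the subsystem was not given must come back as JSON-RPC error -5 (Input/output error). A sketch of both steps, under the assumption that NOT is the suite's expected-failure helper and that the subsystem at that point accepts only key1 for this host NQN:

# All-digests / all-dhgroups host configuration (auth.sh@102-103), then the
# expected-failure attach. NOT inverts the exit status, so the failed attach
# counts as a pass for the test.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

# Host offers key2 while the subsystem only allows key1 for this host NQN, so
# bdev_nvme_attach_controller is expected to fail with -5 (Input/output error).
NOT $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $hostnqn -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2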
00:20:51.309 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.309 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.569 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.569 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.569 { 00:20:51.569 "cntlid": 145, 00:20:51.569 "qid": 0, 00:20:51.569 "state": "enabled", 00:20:51.569 "thread": "nvmf_tgt_poll_group_000", 00:20:51.569 "listen_address": { 00:20:51.569 "trtype": "TCP", 00:20:51.569 "adrfam": "IPv4", 00:20:51.569 "traddr": "10.0.0.2", 00:20:51.569 "trsvcid": "4420" 00:20:51.569 }, 00:20:51.569 "peer_address": { 00:20:51.569 "trtype": "TCP", 00:20:51.569 "adrfam": "IPv4", 00:20:51.569 "traddr": "10.0.0.1", 00:20:51.569 "trsvcid": "54972" 00:20:51.569 }, 00:20:51.569 "auth": { 00:20:51.569 "state": "completed", 00:20:51.569 "digest": "sha512", 00:20:51.569 "dhgroup": "ffdhe8192" 00:20:51.569 } 00:20:51.569 } 00:20:51.569 ]' 00:20:51.569 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.569 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.569 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.569 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.569 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.569 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.569 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.569 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.830 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:YzkzNzU0ZTFlMjNlZGVjYjk0MTU0ZTI1NTM3NzllZThiMjgzZTRiM2FmZGQzMDY4yFw+VQ==: --dhchap-ctrl-secret DHHC-1:03:Mzk1Zjk5NzFhMzA1MzkwOWRlYzY0NThjN2NiNGNiZjhmZWMzZTJhMDU1ZWZiNjllM2I3OTE0NTZlMmIyZDhlZApgY5o=: 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.402 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:52.664 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.664 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:52.664 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:52.925 request: 00:20:52.925 { 00:20:52.925 "name": "nvme0", 00:20:52.925 "trtype": "tcp", 00:20:52.925 "traddr": "10.0.0.2", 00:20:52.925 "adrfam": "ipv4", 00:20:52.925 "trsvcid": "4420", 00:20:52.925 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:52.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:52.925 "prchk_reftag": false, 00:20:52.925 "prchk_guard": false, 00:20:52.925 "hdgst": false, 00:20:52.925 "ddgst": false, 00:20:52.925 "dhchap_key": "key2", 00:20:52.925 "method": "bdev_nvme_attach_controller", 00:20:52.925 "req_id": 1 00:20:52.925 } 00:20:52.925 Got JSON-RPC error response 00:20:52.925 response: 00:20:52.925 { 00:20:52.925 "code": -5, 00:20:52.925 "message": "Input/output error" 00:20:52.925 } 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:52.925 
07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:52.925 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:53.497 request: 00:20:53.497 { 00:20:53.497 "name": "nvme0", 00:20:53.497 "trtype": "tcp", 00:20:53.497 "traddr": "10.0.0.2", 00:20:53.497 "adrfam": "ipv4", 00:20:53.497 "trsvcid": "4420", 00:20:53.497 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:20:53.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:53.497 "prchk_reftag": false, 00:20:53.497 "prchk_guard": false, 00:20:53.497 "hdgst": false, 00:20:53.497 "ddgst": false, 00:20:53.497 "dhchap_key": "key1", 00:20:53.497 "dhchap_ctrlr_key": "ckey2", 00:20:53.497 "method": "bdev_nvme_attach_controller", 00:20:53.497 "req_id": 1 00:20:53.497 } 00:20:53.497 Got JSON-RPC error response 00:20:53.497 response: 00:20:53.497 { 00:20:53.497 "code": -5, 00:20:53.497 "message": "Input/output error" 00:20:53.497 } 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.497 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.069 request: 00:20:54.069 { 00:20:54.069 "name": "nvme0", 00:20:54.069 "trtype": "tcp", 00:20:54.069 "traddr": "10.0.0.2", 00:20:54.069 "adrfam": "ipv4", 00:20:54.069 "trsvcid": "4420", 00:20:54.069 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:54.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:54.069 "prchk_reftag": false, 00:20:54.069 "prchk_guard": false, 00:20:54.069 "hdgst": false, 00:20:54.069 "ddgst": false, 00:20:54.069 "dhchap_key": "key1", 00:20:54.069 "dhchap_ctrlr_key": "ckey1", 00:20:54.069 "method": "bdev_nvme_attach_controller", 00:20:54.069 "req_id": 1 00:20:54.069 } 00:20:54.069 Got JSON-RPC error response 00:20:54.069 response: 00:20:54.069 { 00:20:54.069 "code": -5, 00:20:54.069 "message": "Input/output error" 00:20:54.069 } 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 89652 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 89652 ']' 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 89652 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89652 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 89652' 00:20:54.069 killing process with pid 89652 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 89652 00:20:54.069 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 89652 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=116160 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 116160 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 116160 ']' 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:54.330 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 116160 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 116160 ']' 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.272 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.844 00:20:55.844 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.844 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.844 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.106 { 00:20:56.106 "cntlid": 1, 00:20:56.106 "qid": 0, 00:20:56.106 "state": "enabled", 00:20:56.106 "thread": "nvmf_tgt_poll_group_000", 00:20:56.106 "listen_address": { 00:20:56.106 "trtype": "TCP", 00:20:56.106 "adrfam": "IPv4", 00:20:56.106 "traddr": "10.0.0.2", 00:20:56.106 "trsvcid": "4420" 00:20:56.106 }, 00:20:56.106 "peer_address": { 00:20:56.106 "trtype": "TCP", 00:20:56.106 "adrfam": "IPv4", 00:20:56.106 "traddr": "10.0.0.1", 00:20:56.106 "trsvcid": "55008" 00:20:56.106 }, 00:20:56.106 "auth": { 00:20:56.106 "state": "completed", 00:20:56.106 "digest": "sha512", 00:20:56.106 "dhgroup": "ffdhe8192" 00:20:56.106 } 00:20:56.106 } 00:20:56.106 ]' 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.106 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.366 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:YTMxNmJlZTNiYTczMTc1OGM0YzNkYTliOTU3YjkyOTg3ZWIxNjJjODdkZWEzMzEzMDI4ODRmYjg5NjgxNjU0Y44nM0w=: 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.308 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.308 request: 00:20:57.308 { 00:20:57.308 "name": "nvme0", 00:20:57.308 "trtype": "tcp", 00:20:57.308 "traddr": "10.0.0.2", 00:20:57.308 "adrfam": "ipv4", 00:20:57.309 "trsvcid": "4420", 00:20:57.309 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:57.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:57.309 "prchk_reftag": false, 00:20:57.309 "prchk_guard": false, 00:20:57.309 "hdgst": false, 00:20:57.309 "ddgst": false, 00:20:57.309 "dhchap_key": "key3", 00:20:57.309 "method": "bdev_nvme_attach_controller", 00:20:57.309 "req_id": 1 00:20:57.309 } 00:20:57.309 Got JSON-RPC error response 00:20:57.309 response: 00:20:57.309 { 00:20:57.309 "code": -5, 00:20:57.309 "message": "Input/output error" 00:20:57.309 } 00:20:57.569 07:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.569 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.829 request: 00:20:57.829 { 00:20:57.829 "name": "nvme0", 00:20:57.829 "trtype": "tcp", 00:20:57.829 "traddr": "10.0.0.2", 00:20:57.829 "adrfam": "ipv4", 00:20:57.829 "trsvcid": "4420", 00:20:57.830 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:57.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:57.830 "prchk_reftag": false, 00:20:57.830 "prchk_guard": false, 00:20:57.830 "hdgst": false, 00:20:57.830 "ddgst": false, 00:20:57.830 "dhchap_key": "key3", 00:20:57.830 
"method": "bdev_nvme_attach_controller", 00:20:57.830 "req_id": 1 00:20:57.830 } 00:20:57.830 Got JSON-RPC error response 00:20:57.830 response: 00:20:57.830 { 00:20:57.830 "code": -5, 00:20:57.830 "message": "Input/output error" 00:20:57.830 } 00:20:57.830 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:57.830 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:57.830 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:57.830 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:57.830 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:57.830 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:58.091 request: 00:20:58.091 { 00:20:58.091 "name": "nvme0", 00:20:58.091 "trtype": "tcp", 00:20:58.091 "traddr": "10.0.0.2", 00:20:58.091 "adrfam": "ipv4", 00:20:58.091 "trsvcid": "4420", 00:20:58.091 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:58.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:58.091 "prchk_reftag": false, 00:20:58.091 "prchk_guard": false, 00:20:58.091 "hdgst": false, 00:20:58.091 "ddgst": false, 00:20:58.091 "dhchap_key": "key0", 00:20:58.091 "dhchap_ctrlr_key": "key1", 00:20:58.091 "method": "bdev_nvme_attach_controller", 00:20:58.091 "req_id": 1 00:20:58.091 } 00:20:58.091 Got JSON-RPC error response 00:20:58.091 response: 00:20:58.091 { 00:20:58.091 "code": -5, 00:20:58.091 "message": "Input/output error" 00:20:58.091 } 00:20:58.091 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:58.091 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:58.091 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:58.091 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:58.091 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:58.091 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:58.351 00:20:58.351 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:58.351 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
00:20:58.351 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 89998 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 89998 ']' 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 89998 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89998 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89998' 00:20:58.611 killing process with pid 89998 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 89998 00:20:58.611 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 89998 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:58.872 rmmod nvme_tcp 00:20:58.872 rmmod nvme_fabrics 00:20:58.872 rmmod nvme_keyring 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 116160 
']' 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 116160 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 116160 ']' 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 116160 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:58.872 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 116160 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 116160' 00:20:59.132 killing process with pid 116160 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 116160 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 116160 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.132 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.qKl /tmp/spdk.key-sha256.29C /tmp/spdk.key-sha384.YZs /tmp/spdk.key-sha512.ENy /tmp/spdk.key-sha512.GWf /tmp/spdk.key-sha384.VvG /tmp/spdk.key-sha256.y9Y '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:01.680 00:21:01.680 real 2m24.239s 00:21:01.680 user 5m21.002s 00:21:01.680 sys 0m21.414s 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.680 ************************************ 00:21:01.680 END TEST nvmf_auth_target 00:21:01.680 ************************************ 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:01.680 ************************************ 00:21:01.680 START TEST nvmf_bdevio_no_huge 00:21:01.680 ************************************ 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:01.680 * Looking for test storage... 00:21:01.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.680 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:01.681 07:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:01.681 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.337 07:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:08.337 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.337 07:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:08.337 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:08.337 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
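
The trace above is nvmf/common.sh's NIC discovery: it builds per-vendor PCI device-ID lists (E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox IDs), keeps only the E810 entries (the [[ e810 == e810 ]] branch above), and then resolves each matching PCI address to its kernel net device via /sys/bus/pci/devices/<pci>/net/. A minimal stand-alone sketch of the same lookup follows; the 0x8086/0x159b IDs come from this run, while the loop itself is an illustrative reimplementation rather than the harness's exact code.

# sketch: list E810 (8086:159b) ports and their bound net devices, as the discovery above does
intel=0x8086
e810_dev=0x159b                          # device ID reported above for 0000:4b:00.0 and .1
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == "$e810_dev" ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue        # port has no net driver bound
        echo "Found ${pci##*/} -> ${net##*/} ($(cat "$net/operstate"))"
    done
done
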
00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:08.337 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.337 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:08.599 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:21:08.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:21:08.599 00:21:08.599 --- 10.0.0.2 ping statistics --- 00:21:08.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.599 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:21:08.599 00:21:08.599 --- 10.0.0.1 ping statistics --- 00:21:08.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.599 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=121839 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 121839 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 121839 ']' 00:21:08.599 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.600 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:08.600 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
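
For reference, the nvmf_tcp_init sequence traced above boils down to a handful of iproute2/iptables steps: the first E810 port (cvl_0_0) is moved into a fresh network namespace and addressed as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, TCP port 4420 is opened on the initiator interface, and connectivity is checked with one ping in each direction before the target application is started inside the namespace. A condensed sketch, with names and addresses taken from this run:

NS=cvl_0_0_ns_spdk; TGT_IF=cvl_0_0; INI_IF=cvl_0_1          # names from this run
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                           # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
# the target then runs inside the namespace, exactly as launched above:
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
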
00:21:08.600 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:08.600 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:08.861 [2024-07-25 07:27:15.998654] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:21:08.861 [2024-07-25 07:27:15.998728] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:08.861 [2024-07-25 07:27:16.093982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.861 [2024-07-25 07:27:16.201632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.862 [2024-07-25 07:27:16.201686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.862 [2024-07-25 07:27:16.201695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.862 [2024-07-25 07:27:16.201702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.862 [2024-07-25 07:27:16.201708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.862 [2024-07-25 07:27:16.201869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:21:08.862 [2024-07-25 07:27:16.202027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:21:08.862 [2024-07-25 07:27:16.202190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.862 [2024-07-25 07:27:16.202190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:21:09.434 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:09.434 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:09.434 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.434 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:09.434 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.696 [2024-07-25 07:27:16.840789] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.696 07:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.696 Malloc0 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.696 [2024-07-25 07:27:16.886328] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.696 { 00:21:09.696 "params": { 00:21:09.696 "name": "Nvme$subsystem", 00:21:09.696 "trtype": "$TEST_TRANSPORT", 00:21:09.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.696 "adrfam": "ipv4", 00:21:09.696 "trsvcid": "$NVMF_PORT", 00:21:09.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.696 "hdgst": ${hdgst:-false}, 00:21:09.696 "ddgst": ${ddgst:-false} 00:21:09.696 }, 00:21:09.696 "method": "bdev_nvme_attach_controller" 00:21:09.696 } 00:21:09.696 EOF 00:21:09.696 )") 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
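
With the target listening on its RPC socket, bdevio.sh does all of its configuration over JSON-RPC, which is what the rpc_cmd lines above record: a TCP transport, a 64 MiB / 512-byte Malloc bdev, a subsystem that allows any host, the bdev attached as a namespace, and a TCP listener on 10.0.0.2:4420. The equivalent calls issued directly with scripts/rpc.py are sketched below; treating rpc_cmd as a thin wrapper around rpc.py on the default /var/tmp/spdk.sock socket is an assumption about the harness, while the RPC names and arguments are verbatim from the trace.

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport, 8 KiB IO unit
$RPC bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevio is then started (also with --no-huge -s 1024) against a generated JSON config that
# attaches an NVMe controller to 10.0.0.2:4420, as printed by gen_nvmf_target_json just below.
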
00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:09.696 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:09.696 "params": { 00:21:09.696 "name": "Nvme1", 00:21:09.696 "trtype": "tcp", 00:21:09.696 "traddr": "10.0.0.2", 00:21:09.696 "adrfam": "ipv4", 00:21:09.696 "trsvcid": "4420", 00:21:09.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:09.696 "hdgst": false, 00:21:09.696 "ddgst": false 00:21:09.696 }, 00:21:09.696 "method": "bdev_nvme_attach_controller" 00:21:09.696 }' 00:21:09.696 [2024-07-25 07:27:16.941146] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:21:09.696 [2024-07-25 07:27:16.941218] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid122039 ] 00:21:09.696 [2024-07-25 07:27:17.008528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:09.957 [2024-07-25 07:27:17.105178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.957 [2024-07-25 07:27:17.105320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.957 [2024-07-25 07:27:17.105417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.957 I/O targets: 00:21:09.957 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:09.957 00:21:09.957 00:21:09.957 CUnit - A unit testing framework for C - Version 2.1-3 00:21:09.957 http://cunit.sourceforge.net/ 00:21:09.957 00:21:09.957 00:21:09.957 Suite: bdevio tests on: Nvme1n1 00:21:09.957 Test: blockdev write read block ...passed 00:21:10.218 Test: blockdev write zeroes read block ...passed 00:21:10.218 Test: blockdev write zeroes read no split ...passed 00:21:10.218 Test: blockdev write zeroes read split ...passed 00:21:10.218 Test: blockdev write zeroes read split partial ...passed 00:21:10.218 Test: blockdev reset ...[2024-07-25 07:27:17.489604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:10.218 [2024-07-25 07:27:17.489668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83ac70 (9): Bad file descriptor 00:21:10.218 [2024-07-25 07:27:17.503732] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:10.218 passed 00:21:10.218 Test: blockdev write read 8 blocks ...passed 00:21:10.218 Test: blockdev write read size > 128k ...passed 00:21:10.218 Test: blockdev write read invalid size ...passed 00:21:10.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:10.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:10.218 Test: blockdev write read max offset ...passed 00:21:10.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:10.479 Test: blockdev writev readv 8 blocks ...passed 00:21:10.479 Test: blockdev writev readv 30 x 1block ...passed 00:21:10.479 Test: blockdev writev readv block ...passed 00:21:10.479 Test: blockdev writev readv size > 128k ...passed 00:21:10.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:10.479 Test: blockdev comparev and writev ...[2024-07-25 07:27:17.778158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.479 [2024-07-25 07:27:17.778182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.479 [2024-07-25 07:27:17.778193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.479 [2024-07-25 07:27:17.778199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:10.479 [2024-07-25 07:27:17.778853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.479 [2024-07-25 07:27:17.778862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:10.479 [2024-07-25 07:27:17.778872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.479 [2024-07-25 07:27:17.778878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:10.479 [2024-07-25 07:27:17.779479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.479 [2024-07-25 07:27:17.779487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:10.479 [2024-07-25 07:27:17.779497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.479 [2024-07-25 07:27:17.779502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:10.479 [2024-07-25 07:27:17.780111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.479 [2024-07-25 07:27:17.780118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:10.479 [2024-07-25 07:27:17.780128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.479 [2024-07-25 07:27:17.780133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:10.479 passed 00:21:10.741 Test: blockdev nvme passthru rw ...passed 00:21:10.741 Test: blockdev nvme passthru vendor specific ...[2024-07-25 07:27:17.866162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:10.741 [2024-07-25 07:27:17.866175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:10.741 [2024-07-25 07:27:17.866636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:10.741 [2024-07-25 07:27:17.866644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:10.741 [2024-07-25 07:27:17.867114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:10.741 [2024-07-25 07:27:17.867122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:10.741 [2024-07-25 07:27:17.867601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:10.741 [2024-07-25 07:27:17.867609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:10.741 passed 00:21:10.741 Test: blockdev nvme admin passthru ...passed 00:21:10.741 Test: blockdev copy ...passed 00:21:10.741 00:21:10.741 Run Summary: Type Total Ran Passed Failed Inactive 00:21:10.741 suites 1 1 n/a 0 0 00:21:10.741 tests 23 23 23 0 0 00:21:10.741 asserts 152 152 152 0 n/a 00:21:10.741 00:21:10.741 Elapsed time = 1.364 seconds 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.001 rmmod nvme_tcp 00:21:11.001 rmmod nvme_fabrics 00:21:11.001 rmmod nvme_keyring 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 121839 ']' 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 121839 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 121839 ']' 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 121839 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 121839 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 121839' 00:21:11.001 killing process with pid 121839 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 121839 00:21:11.001 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 121839 00:21:11.261 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.261 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.261 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.261 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.261 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.261 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.261 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.261 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:13.808 00:21:13.808 real 0m12.076s 00:21:13.808 user 0m13.337s 00:21:13.808 sys 0m6.364s 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:13.808 ************************************ 00:21:13.808 END TEST nvmf_bdevio_no_huge 00:21:13.808 ************************************ 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # 
'[' 3 -le 1 ']' 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:13.808 ************************************ 00:21:13.808 START TEST nvmf_tls 00:21:13.808 ************************************ 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:13.808 * Looking for test storage... 00:21:13.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
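
Sourcing nvmf/common.sh above also produces the initiator identity for this test: nvme gen-hostnqn returns a UUID-based NQN, the same UUID is stored as the host ID, and both end up in the NVME_HOST array used with nvme connect. A small illustrative sketch follows; the exact expression deriving NVME_HOSTID and the connect parameters are assumptions for illustration (the target address and subsystem NQN are simply reused from the bdevio run earlier in this log), not something shown verbatim in this excerpt.

NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
NVME_HOSTID=${NVME_HOSTNQN##*:}             # reuse the UUID portion as the host ID (assumed derivation)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# typical initiator-side use later in these tests (illustrative parameters):
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
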
00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.808 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.397 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:20.398 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:20.398 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:20.398 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:20.398 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:20.398 07:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:20.398 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:20.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.860 ms 00:21:20.658 00:21:20.658 --- 10.0.0.2 ping statistics --- 00:21:20.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.658 rtt min/avg/max/mdev = 0.860/0.860/0.860/0.000 ms 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:20.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:20.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:21:20.658 00:21:20.658 --- 10.0.0.1 ping statistics --- 00:21:20.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.658 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=126378 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 126378 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:20.658 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 126378 ']' 00:21:20.659 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.659 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.659 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.659 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.659 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.659 [2024-07-25 07:27:27.971343] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:21:20.659 [2024-07-25 07:27:27.971393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.659 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.919 [2024-07-25 07:27:28.054985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.919 [2024-07-25 07:27:28.117953] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.919 [2024-07-25 07:27:28.117991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.919 [2024-07-25 07:27:28.117999] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.919 [2024-07-25 07:27:28.118005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.919 [2024-07-25 07:27:28.118011] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.919 [2024-07-25 07:27:28.118030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.491 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.491 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:21.491 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.491 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:21.491 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.491 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.491 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:21.491 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:21.752 true 00:21:21.752 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:21.752 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:22.012 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:22.012 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:22.012 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:22.012 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:22.012 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:22.274 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:22.274 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:22.274 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:21:22.535 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:22.535 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:22.535 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:22.535 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:22.535 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:22.535 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:22.797 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:22.797 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:22.797 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:23.059 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:23.059 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:23.059 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:23.059 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:23.059 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:23.320 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:23.320 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:23.580 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:23.580 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:23.580 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.1H5caf9KtC 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.h80ONmVVSM 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.1H5caf9KtC 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.h80ONmVVSM 00:21:23.581 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:23.841 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:23.841 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.1H5caf9KtC 00:21:23.841 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.1H5caf9KtC 00:21:23.841 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:24.102 [2024-07-25 07:27:31.330271] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.102 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:24.362 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:24.362 [2024-07-25 07:27:31.639022] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.362 [2024-07-25 07:27:31.639231] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.362 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:24.626 malloc0 00:21:24.626 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:24.626 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1H5caf9KtC 00:21:24.928 [2024-07-25 07:27:32.090144] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:24.928 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1H5caf9KtC 00:21:24.928 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.933 Initializing NVMe Controllers 00:21:34.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:34.933 Initialization complete. Launching workers. 00:21:34.933 ======================================================== 00:21:34.933 Latency(us) 00:21:34.933 Device Information : IOPS MiB/s Average min max 00:21:34.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18696.05 73.03 3423.25 984.14 6295.52 00:21:34.933 ======================================================== 00:21:34.933 Total : 18696.05 73.03 3423.25 984.14 6295.52 00:21:34.933 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1H5caf9KtC 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1H5caf9KtC' 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=129221 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 129221 /var/tmp/bdevperf.sock 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 129221 ']' 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.933 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.933 [2024-07-25 07:27:42.274133] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:21:34.933 [2024-07-25 07:27:42.274192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129221 ] 00:21:34.933 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.194 [2024-07-25 07:27:42.323124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.194 [2024-07-25 07:27:42.375879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.766 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.766 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:35.766 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1H5caf9KtC 00:21:36.026 [2024-07-25 07:27:43.160796] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.026 [2024-07-25 07:27:43.160858] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:36.026 TLSTESTn1 00:21:36.026 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:36.026 Running I/O for 10 seconds... 
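While that 10-second bdevperf run executes, it is worth noting where the key material came from. The two PSK files handed to --psk (/tmp/tmp.1H5caf9KtC and /tmp/tmp.h80ONmVVSM) were produced at 07:27:30 by format_interchange_psk, which wraps a raw key in the NVMe TLS interchange format: the key bytes plus a 4-byte CRC32 are base64-encoded and prefixed with NVMeTLSkey-1:01: (the 01 tracks the digest argument; the 48-byte key generated later at 07:27:57 gets 02). A minimal stand-alone sketch of that helper, assuming a little-endian CRC byte order; the authoritative implementation is format_key in nvmf/common.sh:

```bash
# Illustrative re-creation of format_interchange_psk; the CRC byte order is an assumption,
# the real helper is format_key in nvmf/common.sh.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib

key, digest = sys.argv[1].encode(), int(sys.argv[2])
# assumption: a 4-byte little-endian CRC32 of the key bytes is appended before base64-encoding
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PYEOF
}

key_path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
chmod 0600 "$key_path"   # mirrors the chmod above, before the file is handed to --psk
```

Decoding the base64 of the keys logged above shows the ASCII key string followed by 4 extra bytes, consistent with an appended checksum of this kind.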
00:21:48.257 00:21:48.257 Latency(us) 00:21:48.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.257 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:48.257 Verification LBA range: start 0x0 length 0x2000 00:21:48.257 TLSTESTn1 : 10.08 2052.96 8.02 0.00 0.00 62134.39 6116.69 158160.21 00:21:48.257 =================================================================================================================== 00:21:48.258 Total : 2052.96 8.02 0.00 0.00 62134.39 6116.69 158160.21 00:21:48.258 0 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 129221 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 129221 ']' 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 129221 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 129221 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 129221' 00:21:48.258 killing process with pid 129221 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 129221 00:21:48.258 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.258 00:21:48.258 Latency(us) 00:21:48.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.258 =================================================================================================================== 00:21:48.258 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:48.258 [2024-07-25 07:27:53.524862] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 129221 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.h80ONmVVSM 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.h80ONmVVSM 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.258 
07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.h80ONmVVSM 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.h80ONmVVSM' 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=131450 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 131450 /var/tmp/bdevperf.sock 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 131450 ']' 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.258 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.258 [2024-07-25 07:27:53.689770] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:21:48.258 [2024-07-25 07:27:53.689824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131450 ] 00:21:48.258 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.258 [2024-07-25 07:27:53.739774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.258 [2024-07-25 07:27:53.791255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.h80ONmVVSM 00:21:48.258 [2024-07-25 07:27:54.608316] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.258 [2024-07-25 07:27:54.608383] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:48.258 [2024-07-25 07:27:54.613002] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:48.258 [2024-07-25 07:27:54.613403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1866f20 (107): Transport endpoint is not connected 00:21:48.258 [2024-07-25 07:27:54.614396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1866f20 (9): Bad file descriptor 00:21:48.258 [2024-07-25 07:27:54.615398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.258 [2024-07-25 07:27:54.615405] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:48.258 [2024-07-25 07:27:54.615412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
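For reference, the successful TLS path exercised earlier (target configured at 07:27:31, bdevperf attached at 07:27:43) reduces to the RPC sequence below; every command appears in the trace above and is only collected here in one place. The negative cases in the rest of this section, starting with the wrong-key attempt whose JSON-RPC exchange is dumped next, each change exactly one of these inputs (the PSK file, the host NQN, the subsystem NQN, or the presence of a PSK at all), and all of them surface the same way: bdev_nvme_attach_controller returns -5, "Input/output error".

```bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.1H5caf9KtC   # interchange-format PSK file, chmod 0600

# target side (nvmf_tgt was started with --wait-for-rpc; its RPC socket is /var/tmp/spdk.sock)
$RPC sock_set_default_impl -i ssl
$RPC sock_impl_set_options -i ssl --tls-version 13
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

# initiator side (bdevperf was started with -z; its RPC socket is /var/tmp/bdevperf.sock)
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
```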
00:21:48.258 request: 00:21:48.258 { 00:21:48.258 "name": "TLSTEST", 00:21:48.258 "trtype": "tcp", 00:21:48.258 "traddr": "10.0.0.2", 00:21:48.258 "adrfam": "ipv4", 00:21:48.258 "trsvcid": "4420", 00:21:48.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.258 "prchk_reftag": false, 00:21:48.258 "prchk_guard": false, 00:21:48.258 "hdgst": false, 00:21:48.258 "ddgst": false, 00:21:48.258 "psk": "/tmp/tmp.h80ONmVVSM", 00:21:48.258 "method": "bdev_nvme_attach_controller", 00:21:48.258 "req_id": 1 00:21:48.258 } 00:21:48.258 Got JSON-RPC error response 00:21:48.258 response: 00:21:48.258 { 00:21:48.258 "code": -5, 00:21:48.258 "message": "Input/output error" 00:21:48.258 } 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 131450 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 131450 ']' 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 131450 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131450 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131450' 00:21:48.258 killing process with pid 131450 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 131450 00:21:48.258 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.258 00:21:48.258 Latency(us) 00:21:48.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.258 =================================================================================================================== 00:21:48.258 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:48.258 [2024-07-25 07:27:54.700508] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 131450 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1H5caf9KtC 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1H5caf9KtC 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:48.258 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1H5caf9KtC 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1H5caf9KtC' 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=131786 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 131786 /var/tmp/bdevperf.sock 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 131786 ']' 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.259 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.259 [2024-07-25 07:27:54.856469] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:21:48.259 [2024-07-25 07:27:54.856526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131786 ] 00:21:48.259 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.259 [2024-07-25 07:27:54.906505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.259 [2024-07-25 07:27:54.957812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.519 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.519 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:48.519 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.1H5caf9KtC 00:21:48.519 [2024-07-25 07:27:55.766670] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.519 [2024-07-25 07:27:55.766735] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:48.519 [2024-07-25 07:27:55.775740] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:48.519 [2024-07-25 07:27:55.775762] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:48.519 [2024-07-25 07:27:55.775782] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:48.519 [2024-07-25 07:27:55.777031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d09f20 (107): Transport endpoint is not connected 00:21:48.519 [2024-07-25 07:27:55.778027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d09f20 (9): Bad file descriptor 00:21:48.519 [2024-07-25 07:27:55.779028] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.519 [2024-07-25 07:27:55.779036] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:48.520 [2024-07-25 07:27:55.779043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
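The failure mode here differs from the wrong-key case above. Because host2 is not registered on cnode1, the target's socket layer cannot even locate a PSK for the offered identity ("Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"), whereas no such lookup error was logged for the wrong-key attempt, which points at a handshake failure there instead. The same lookup error shows up again further down when the subsystem NQN is wrong (cnode2). The lookup appears to be keyed on the (host NQN, subsystem NQN) pairs registered via nvmf_subsystem_add_host, so, as a hypothetical remediation that this test deliberately does not perform, host2 could be admitted with its own key:

```bash
# Hypothetical: admit host2 on cnode1 so the identity lookup above would succeed.
# Not part of the test flow; shown only to illustrate what the lookup is keyed on.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
    --psk /tmp/tmp.1H5caf9KtC
```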
00:21:48.520 request: 00:21:48.520 { 00:21:48.520 "name": "TLSTEST", 00:21:48.520 "trtype": "tcp", 00:21:48.520 "traddr": "10.0.0.2", 00:21:48.520 "adrfam": "ipv4", 00:21:48.520 "trsvcid": "4420", 00:21:48.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.520 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:48.520 "prchk_reftag": false, 00:21:48.520 "prchk_guard": false, 00:21:48.520 "hdgst": false, 00:21:48.520 "ddgst": false, 00:21:48.520 "psk": "/tmp/tmp.1H5caf9KtC", 00:21:48.520 "method": "bdev_nvme_attach_controller", 00:21:48.520 "req_id": 1 00:21:48.520 } 00:21:48.520 Got JSON-RPC error response 00:21:48.520 response: 00:21:48.520 { 00:21:48.520 "code": -5, 00:21:48.520 "message": "Input/output error" 00:21:48.520 } 00:21:48.520 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 131786 00:21:48.520 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 131786 ']' 00:21:48.520 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 131786 00:21:48.520 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:48.520 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:48.520 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131786 00:21:48.520 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:48.520 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:48.520 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131786' 00:21:48.520 killing process with pid 131786 00:21:48.520 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 131786 00:21:48.520 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.520 00:21:48.520 Latency(us) 00:21:48.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.520 =================================================================================================================== 00:21:48.520 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:48.520 [2024-07-25 07:27:55.866897] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:48.520 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 131786 00:21:48.780 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1H5caf9KtC 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1H5caf9KtC 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1H5caf9KtC 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1H5caf9KtC' 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=131873 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 131873 /var/tmp/bdevperf.sock 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 131873 ']' 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.781 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.781 [2024-07-25 07:27:56.006329] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:21:48.781 [2024-07-25 07:27:56.006389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131873 ] 00:21:48.781 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.781 [2024-07-25 07:27:56.055126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.781 [2024-07-25 07:27:56.107995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.042 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.042 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:49.042 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1H5caf9KtC 00:21:49.042 [2024-07-25 07:27:56.327530] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.042 [2024-07-25 07:27:56.327593] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:49.042 [2024-07-25 07:27:56.337092] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:49.042 [2024-07-25 07:27:56.337111] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:49.042 [2024-07-25 07:27:56.337133] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:49.042 [2024-07-25 07:27:56.337850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5af20 (107): Transport endpoint is not connected 00:21:49.042 [2024-07-25 07:27:56.338845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5af20 (9): Bad file descriptor 00:21:49.042 [2024-07-25 07:27:56.339847] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:49.042 [2024-07-25 07:27:56.339855] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:49.042 [2024-07-25 07:27:56.339862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:49.042 request: 00:21:49.042 { 00:21:49.042 "name": "TLSTEST", 00:21:49.042 "trtype": "tcp", 00:21:49.042 "traddr": "10.0.0.2", 00:21:49.042 "adrfam": "ipv4", 00:21:49.042 "trsvcid": "4420", 00:21:49.042 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:49.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.042 "prchk_reftag": false, 00:21:49.042 "prchk_guard": false, 00:21:49.042 "hdgst": false, 00:21:49.042 "ddgst": false, 00:21:49.042 "psk": "/tmp/tmp.1H5caf9KtC", 00:21:49.042 "method": "bdev_nvme_attach_controller", 00:21:49.042 "req_id": 1 00:21:49.042 } 00:21:49.042 Got JSON-RPC error response 00:21:49.042 response: 00:21:49.042 { 00:21:49.042 "code": -5, 00:21:49.042 "message": "Input/output error" 00:21:49.042 } 00:21:49.042 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 131873 00:21:49.042 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 131873 ']' 00:21:49.042 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 131873 00:21:49.042 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:49.042 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:49.042 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 131873 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 131873' 00:21:49.303 killing process with pid 131873 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 131873 00:21:49.303 Received shutdown signal, test time was about 10.000000 seconds 00:21:49.303 00:21:49.303 Latency(us) 00:21:49.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.303 =================================================================================================================== 00:21:49.303 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:49.303 [2024-07-25 07:27:56.427908] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 131873 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=132141 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:49.303 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 132141 /var/tmp/bdevperf.sock 00:21:49.304 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:49.304 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 132141 ']' 00:21:49.304 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.304 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:49.304 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.304 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:49.304 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.304 [2024-07-25 07:27:56.583354] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:21:49.304 [2024-07-25 07:27:56.583407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132141 ] 00:21:49.304 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.304 [2024-07-25 07:27:56.633299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.564 [2024-07-25 07:27:56.684215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.136 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.136 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:50.136 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:50.397 [2024-07-25 07:27:57.509991] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:50.397 [2024-07-25 07:27:57.511754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9a550 (9): Bad file descriptor 00:21:50.397 [2024-07-25 07:27:57.512753] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.397 [2024-07-25 07:27:57.512764] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:50.397 [2024-07-25 07:27:57.512771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:50.397 request: 00:21:50.397 { 00:21:50.397 "name": "TLSTEST", 00:21:50.397 "trtype": "tcp", 00:21:50.397 "traddr": "10.0.0.2", 00:21:50.397 "adrfam": "ipv4", 00:21:50.397 "trsvcid": "4420", 00:21:50.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.397 "prchk_reftag": false, 00:21:50.397 "prchk_guard": false, 00:21:50.397 "hdgst": false, 00:21:50.397 "ddgst": false, 00:21:50.397 "method": "bdev_nvme_attach_controller", 00:21:50.397 "req_id": 1 00:21:50.397 } 00:21:50.397 Got JSON-RPC error response 00:21:50.397 response: 00:21:50.397 { 00:21:50.397 "code": -5, 00:21:50.397 "message": "Input/output error" 00:21:50.397 } 00:21:50.397 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 132141 00:21:50.397 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 132141 ']' 00:21:50.397 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 132141 00:21:50.397 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:50.397 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132141 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132141' 00:21:50.398 killing process with pid 132141 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 132141 00:21:50.398 Received shutdown signal, test time was about 10.000000 seconds 00:21:50.398 00:21:50.398 Latency(us) 00:21:50.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.398 =================================================================================================================== 00:21:50.398 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 132141 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 126378 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 126378 ']' 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 126378 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 126378 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 126378' 00:21:50.398 killing process with pid 126378 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 126378 00:21:50.398 [2024-07-25 07:27:57.758380] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:50.398 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 126378 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.pqHjbzYrmH 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.pqHjbzYrmH 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=132352 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 132352 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 132352 ']' 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.659 07:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:50.659 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.659 [2024-07-25 07:27:57.994185] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:21:50.659 [2024-07-25 07:27:57.994250] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.659 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.920 [2024-07-25 07:27:58.076410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.920 [2024-07-25 07:27:58.133677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.920 [2024-07-25 07:27:58.133711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.920 [2024-07-25 07:27:58.133717] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.920 [2024-07-25 07:27:58.133722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.920 [2024-07-25 07:27:58.133726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
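The key_long string generated above follows the NVMe/TCP TLS PSK interchange format. Below is a minimal standalone sketch of what the format_interchange_psk/format_key step appears to compute; the NVMeTLSkey-1 prefix, the 48-character hex key and the hash identifier 2 (the ":02:" field) are taken from the trace, while the detail that the four bytes appended before base64-encoding are a little-endian CRC32 of the key string is an assumption about the helper's internals:

```bash
# Sketch of the PSK interchange formatting traced above (CRC32 suffix is an assumption).
prefix="NVMeTLSkey-1"
key="00112233445566778899aabbccddeeff0011223344556677"
digest=2    # rendered as the ":02:" hash-identifier field of the interchange string

key_long=$(python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")        # 4-byte integrity suffix (assumed little-endian)
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()), end="")
PYEOF
)

key_path=$(mktemp)            # e.g. /tmp/tmp.pqHjbzYrmH in the run above
echo -n "$key_long" > "$key_path"
chmod 0600 "$key_path"        # anything more permissive is rejected later in the test
```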
00:21:50.920 [2024-07-25 07:27:58.133744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.491 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.491 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:51.491 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:51.491 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:51.491 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.491 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.491 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.pqHjbzYrmH 00:21:51.491 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pqHjbzYrmH 00:21:51.491 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:51.752 [2024-07-25 07:27:58.936234] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.752 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:51.752 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:52.012 [2024-07-25 07:27:59.249001] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:52.012 [2024-07-25 07:27:59.249198] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.012 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:52.273 malloc0 00:21:52.273 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:52.273 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pqHjbzYrmH 00:21:52.535 [2024-07-25 07:27:59.707950] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pqHjbzYrmH 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pqHjbzYrmH' 00:21:52.535 07:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=132752 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 132752 /var/tmp/bdevperf.sock 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 132752 ']' 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:52.535 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.535 [2024-07-25 07:27:59.774325] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:21:52.535 [2024-07-25 07:27:59.774376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132752 ] 00:21:52.535 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.535 [2024-07-25 07:27:59.824555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.535 [2024-07-25 07:27:59.876901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.476 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:53.476 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:53.476 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pqHjbzYrmH 00:21:53.476 [2024-07-25 07:28:00.677963] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:53.476 [2024-07-25 07:28:00.678023] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:53.476 TLSTESTn1 00:21:53.476 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:53.737 Running I/O for 10 seconds... 
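The TLSTESTn1 run traced above condenses to the target- and host-side sequence below. Every RPC and flag is as shown in the log; $rootdir, $key and $bdevperf_rpc_sock are shorthands for the workspace path, the PSK file and the bdevperf RPC socket purely for readability, and the waitforlisten step between starting bdevperf and attaching is noted as a comment rather than spelled out:

```bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
key=/tmp/tmp.pqHjbzYrmH                      # 0600-mode interchange-format PSK from above
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

# Target side (setup_nvmf_tgt): TCP transport, subsystem, TLS listener, malloc namespace, host+PSK.
$rootdir/scripts/rpc.py nvmf_create_transport -t tcp -o
$rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k            # -k: TLS listener ("secure_channel": true in save_config)
$rootdir/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
$rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rootdir/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$key"

# Host side (run_bdevperf): idle bdevperf, attach over TLS with the same PSK, run the verify job.
$rootdir/build/examples/bdevperf -m 0x4 -z -r $bdevperf_rpc_sock -q 128 -o 4096 -w verify -t 10 &
# (the test waits for $bdevperf_rpc_sock to come up before issuing the next RPC)
$rootdir/scripts/rpc.py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$key"
$rootdir/examples/bdev/bdevperf/bdevperf.py -t 20 -s $bdevperf_rpc_sock perform_tests
```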
00:22:03.780 00:22:03.780 Latency(us) 00:22:03.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.780 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:03.780 Verification LBA range: start 0x0 length 0x2000 00:22:03.780 TLSTESTn1 : 10.04 2657.43 10.38 0.00 0.00 48065.00 6089.39 114469.55 00:22:03.780 =================================================================================================================== 00:22:03.780 Total : 2657.43 10.38 0.00 0.00 48065.00 6089.39 114469.55 00:22:03.780 0 00:22:03.780 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:03.780 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 132752 00:22:03.780 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 132752 ']' 00:22:03.780 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 132752 00:22:03.780 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:03.780 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.780 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132752 00:22:03.780 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:03.780 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:03.780 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132752' 00:22:03.780 killing process with pid 132752 00:22:03.780 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 132752 00:22:03.780 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.780 00:22:03.780 Latency(us) 00:22:03.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.780 =================================================================================================================== 00:22:03.780 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.780 [2024-07-25 07:28:11.018124] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:03.780 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 132752 00:22:03.780 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.pqHjbzYrmH 00:22:03.780 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pqHjbzYrmH 00:22:03.780 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:03.780 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pqHjbzYrmH 00:22:03.780 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:03.781 
07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pqHjbzYrmH 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pqHjbzYrmH' 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=134879 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 134879 /var/tmp/bdevperf.sock 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 134879 ']' 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.781 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.042 [2024-07-25 07:28:11.196162] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:22:04.042 [2024-07-25 07:28:11.196229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134879 ] 00:22:04.042 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.042 [2024-07-25 07:28:11.246193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.042 [2024-07-25 07:28:11.297309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.042 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:04.042 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:04.042 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pqHjbzYrmH 00:22:04.304 [2024-07-25 07:28:11.500773] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.304 [2024-07-25 07:28:11.500813] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:04.304 [2024-07-25 07:28:11.500819] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.pqHjbzYrmH 00:22:04.304 request: 00:22:04.304 { 00:22:04.304 "name": "TLSTEST", 00:22:04.304 "trtype": "tcp", 00:22:04.304 "traddr": "10.0.0.2", 00:22:04.304 "adrfam": "ipv4", 00:22:04.304 "trsvcid": "4420", 00:22:04.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.304 "prchk_reftag": false, 00:22:04.304 "prchk_guard": false, 00:22:04.304 "hdgst": false, 00:22:04.304 "ddgst": false, 00:22:04.304 "psk": "/tmp/tmp.pqHjbzYrmH", 00:22:04.304 "method": "bdev_nvme_attach_controller", 00:22:04.304 "req_id": 1 00:22:04.304 } 00:22:04.304 Got JSON-RPC error response 00:22:04.304 response: 00:22:04.304 { 00:22:04.304 "code": -1, 00:22:04.304 "message": "Operation not permitted" 00:22:04.304 } 00:22:04.304 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 134879 00:22:04.304 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 134879 ']' 00:22:04.304 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 134879 00:22:04.304 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:04.304 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:04.304 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 134879 00:22:04.304 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:04.304 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:04.304 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 134879' 00:22:04.304 killing process with pid 134879 00:22:04.304 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 134879 00:22:04.304 Received shutdown signal, test time was about 10.000000 seconds 00:22:04.304 
00:22:04.304 Latency(us) 00:22:04.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.304 =================================================================================================================== 00:22:04.304 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:04.304 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 134879 00:22:04.565 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 132352 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 132352 ']' 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 132352 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 132352 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 132352' 00:22:04.566 killing process with pid 132352 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 132352 00:22:04.566 [2024-07-25 07:28:11.750647] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 132352 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=135129 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 135129 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 135129 ']' 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.566 07:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:04.566 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.566 [2024-07-25 07:28:11.929239] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:22:04.566 [2024-07-25 07:28:11.929295] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.827 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.827 [2024-07-25 07:28:12.011760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.827 [2024-07-25 07:28:12.065601] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.827 [2024-07-25 07:28:12.065633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.827 [2024-07-25 07:28:12.065639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.827 [2024-07-25 07:28:12.065643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.827 [2024-07-25 07:28:12.065647] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
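The -1 / "Operation not permitted" failure above is the host-side half of the permission check: after target/tls.sh@170 opens the key file up with chmod 0666, bdev_nvme refuses to load it at attach time. A condensed recap of that negative test, using the same paths and flags as the trace:

```bash
key=/tmp/tmp.pqHjbzYrmH
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

chmod 0666 "$key"    # deliberately too permissive

# bdev_nvme_attach_controller now fails before any TLS handshake:
#   "Incorrect permissions for PSK file" -> "Could not load PSK" -> -1 Operation not permitted
if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
       -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"; then
    echo "unexpected: controller attached with a world-readable PSK"
fi
```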
00:22:04.827 [2024-07-25 07:28:12.065661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.pqHjbzYrmH 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.pqHjbzYrmH 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.pqHjbzYrmH 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pqHjbzYrmH 00:22:05.399 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:05.659 [2024-07-25 07:28:12.875698] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.659 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:05.919 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:05.919 [2024-07-25 07:28:13.188457] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.919 [2024-07-25 07:28:13.188662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.919 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:06.178 malloc0 00:22:06.178 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:06.178 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pqHjbzYrmH 00:22:06.438 [2024-07-25 07:28:13.659443] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:06.438 [2024-07-25 07:28:13.659464] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:06.438 [2024-07-25 07:28:13.659484] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:06.438 request: 00:22:06.438 { 00:22:06.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.438 "host": "nqn.2016-06.io.spdk:host1", 00:22:06.438 "psk": "/tmp/tmp.pqHjbzYrmH", 00:22:06.438 "method": "nvmf_subsystem_add_host", 00:22:06.438 "req_id": 1 00:22:06.438 } 00:22:06.438 Got JSON-RPC error response 00:22:06.438 response: 00:22:06.438 { 00:22:06.438 "code": -32603, 00:22:06.438 "message": "Internal error" 00:22:06.438 } 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 135129 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 135129 ']' 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 135129 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 135129 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 135129' 00:22:06.438 killing process with pid 135129 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 135129 00:22:06.438 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 135129 00:22:06.699 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.pqHjbzYrmH 00:22:06.699 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:06.699 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.699 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.699 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.699 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=135589 00:22:06.699 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 135589 00:22:06.699 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:22:06.699 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 135589 ']' 00:22:06.700 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.700 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.700 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.700 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.700 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.700 [2024-07-25 07:28:13.910352] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:22:06.700 [2024-07-25 07:28:13.910404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.700 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.700 [2024-07-25 07:28:13.989451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.700 [2024-07-25 07:28:14.041048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.700 [2024-07-25 07:28:14.041082] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.700 [2024-07-25 07:28:14.041087] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.700 [2024-07-25 07:28:14.041092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.700 [2024-07-25 07:28:14.041096] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
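The -32603 "Internal error" above is the target-side half of the same check: nvmf_subsystem_add_host validates the key file mode before registering the host. Restoring owner-only permissions, as target/tls.sh@181 does, is what lets the setup_nvmf_tgt run below succeed. A condensed recap (the test actually restarts the target between the two calls; this sketch compresses that into one session):

```bash
key=/tmp/tmp.pqHjbzYrmH
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# With the key still at 0666 the target rejects the host registration:
#   "Incorrect permissions for PSK file" -> "Could not retrieve PSK from file" -> -32603 Internal error
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key" \
    || echo "rejected as expected"

chmod 0600 "$key"    # back to owner-only
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key" \
    && echo "accepted (still logs the deprecated 'PSK path' warning)"
```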
00:22:06.700 [2024-07-25 07:28:14.041115] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.643 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.643 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:07.643 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.643 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.643 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.643 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.643 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.pqHjbzYrmH 00:22:07.643 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pqHjbzYrmH 00:22:07.643 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:07.643 [2024-07-25 07:28:14.891439] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.643 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:07.904 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:07.904 [2024-07-25 07:28:15.204206] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:07.904 [2024-07-25 07:28:15.204407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.904 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:08.165 malloc0 00:22:08.165 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:08.426 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pqHjbzYrmH 00:22:08.426 [2024-07-25 07:28:15.675118] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:08.426 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.426 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=135952 00:22:08.426 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.426 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 135952 /var/tmp/bdevperf.sock 00:22:08.426 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # 
'[' -z 135952 ']' 00:22:08.426 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.426 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:08.426 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.426 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:08.426 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.426 [2024-07-25 07:28:15.723685] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:22:08.427 [2024-07-25 07:28:15.723769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135952 ] 00:22:08.427 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.427 [2024-07-25 07:28:15.779935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.688 [2024-07-25 07:28:15.831872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.688 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.688 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:08.688 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pqHjbzYrmH 00:22:08.688 [2024-07-25 07:28:16.051259] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.688 [2024-07-25 07:28:16.051325] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:08.949 TLSTESTn1 00:22:08.949 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:09.210 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:09.210 "subsystems": [ 00:22:09.210 { 00:22:09.210 "subsystem": "keyring", 00:22:09.210 "config": [] 00:22:09.210 }, 00:22:09.210 { 00:22:09.210 "subsystem": "iobuf", 00:22:09.210 "config": [ 00:22:09.210 { 00:22:09.210 "method": "iobuf_set_options", 00:22:09.210 "params": { 00:22:09.210 "small_pool_count": 8192, 00:22:09.210 "large_pool_count": 1024, 00:22:09.211 "small_bufsize": 8192, 00:22:09.211 "large_bufsize": 135168 00:22:09.211 } 00:22:09.211 } 00:22:09.211 ] 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "subsystem": "sock", 00:22:09.211 "config": [ 00:22:09.211 { 00:22:09.211 "method": "sock_set_default_impl", 00:22:09.211 "params": { 00:22:09.211 "impl_name": "posix" 00:22:09.211 } 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "method": "sock_impl_set_options", 00:22:09.211 "params": { 00:22:09.211 "impl_name": "ssl", 00:22:09.211 "recv_buf_size": 4096, 00:22:09.211 "send_buf_size": 4096, 
00:22:09.211 "enable_recv_pipe": true, 00:22:09.211 "enable_quickack": false, 00:22:09.211 "enable_placement_id": 0, 00:22:09.211 "enable_zerocopy_send_server": true, 00:22:09.211 "enable_zerocopy_send_client": false, 00:22:09.211 "zerocopy_threshold": 0, 00:22:09.211 "tls_version": 0, 00:22:09.211 "enable_ktls": false 00:22:09.211 } 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "method": "sock_impl_set_options", 00:22:09.211 "params": { 00:22:09.211 "impl_name": "posix", 00:22:09.211 "recv_buf_size": 2097152, 00:22:09.211 "send_buf_size": 2097152, 00:22:09.211 "enable_recv_pipe": true, 00:22:09.211 "enable_quickack": false, 00:22:09.211 "enable_placement_id": 0, 00:22:09.211 "enable_zerocopy_send_server": true, 00:22:09.211 "enable_zerocopy_send_client": false, 00:22:09.211 "zerocopy_threshold": 0, 00:22:09.211 "tls_version": 0, 00:22:09.211 "enable_ktls": false 00:22:09.211 } 00:22:09.211 } 00:22:09.211 ] 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "subsystem": "vmd", 00:22:09.211 "config": [] 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "subsystem": "accel", 00:22:09.211 "config": [ 00:22:09.211 { 00:22:09.211 "method": "accel_set_options", 00:22:09.211 "params": { 00:22:09.211 "small_cache_size": 128, 00:22:09.211 "large_cache_size": 16, 00:22:09.211 "task_count": 2048, 00:22:09.211 "sequence_count": 2048, 00:22:09.211 "buf_count": 2048 00:22:09.211 } 00:22:09.211 } 00:22:09.211 ] 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "subsystem": "bdev", 00:22:09.211 "config": [ 00:22:09.211 { 00:22:09.211 "method": "bdev_set_options", 00:22:09.211 "params": { 00:22:09.211 "bdev_io_pool_size": 65535, 00:22:09.211 "bdev_io_cache_size": 256, 00:22:09.211 "bdev_auto_examine": true, 00:22:09.211 "iobuf_small_cache_size": 128, 00:22:09.211 "iobuf_large_cache_size": 16 00:22:09.211 } 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "method": "bdev_raid_set_options", 00:22:09.211 "params": { 00:22:09.211 "process_window_size_kb": 1024, 00:22:09.211 "process_max_bandwidth_mb_sec": 0 00:22:09.211 } 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "method": "bdev_iscsi_set_options", 00:22:09.211 "params": { 00:22:09.211 "timeout_sec": 30 00:22:09.211 } 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "method": "bdev_nvme_set_options", 00:22:09.211 "params": { 00:22:09.211 "action_on_timeout": "none", 00:22:09.211 "timeout_us": 0, 00:22:09.211 "timeout_admin_us": 0, 00:22:09.211 "keep_alive_timeout_ms": 10000, 00:22:09.211 "arbitration_burst": 0, 00:22:09.211 "low_priority_weight": 0, 00:22:09.211 "medium_priority_weight": 0, 00:22:09.211 "high_priority_weight": 0, 00:22:09.211 "nvme_adminq_poll_period_us": 10000, 00:22:09.211 "nvme_ioq_poll_period_us": 0, 00:22:09.211 "io_queue_requests": 0, 00:22:09.211 "delay_cmd_submit": true, 00:22:09.211 "transport_retry_count": 4, 00:22:09.211 "bdev_retry_count": 3, 00:22:09.211 "transport_ack_timeout": 0, 00:22:09.211 "ctrlr_loss_timeout_sec": 0, 00:22:09.211 "reconnect_delay_sec": 0, 00:22:09.211 "fast_io_fail_timeout_sec": 0, 00:22:09.211 "disable_auto_failback": false, 00:22:09.211 "generate_uuids": false, 00:22:09.211 "transport_tos": 0, 00:22:09.211 "nvme_error_stat": false, 00:22:09.211 "rdma_srq_size": 0, 00:22:09.211 "io_path_stat": false, 00:22:09.211 "allow_accel_sequence": false, 00:22:09.211 "rdma_max_cq_size": 0, 00:22:09.211 "rdma_cm_event_timeout_ms": 0, 00:22:09.211 "dhchap_digests": [ 00:22:09.211 "sha256", 00:22:09.211 "sha384", 00:22:09.211 "sha512" 00:22:09.211 ], 00:22:09.211 "dhchap_dhgroups": [ 00:22:09.211 "null", 00:22:09.211 "ffdhe2048", 00:22:09.211 
"ffdhe3072", 00:22:09.211 "ffdhe4096", 00:22:09.211 "ffdhe6144", 00:22:09.211 "ffdhe8192" 00:22:09.211 ] 00:22:09.211 } 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "method": "bdev_nvme_set_hotplug", 00:22:09.211 "params": { 00:22:09.211 "period_us": 100000, 00:22:09.211 "enable": false 00:22:09.211 } 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "method": "bdev_malloc_create", 00:22:09.211 "params": { 00:22:09.211 "name": "malloc0", 00:22:09.211 "num_blocks": 8192, 00:22:09.211 "block_size": 4096, 00:22:09.211 "physical_block_size": 4096, 00:22:09.211 "uuid": "35868478-9b3a-4722-92b5-4cb2ebd52f7a", 00:22:09.211 "optimal_io_boundary": 0, 00:22:09.211 "md_size": 0, 00:22:09.211 "dif_type": 0, 00:22:09.211 "dif_is_head_of_md": false, 00:22:09.211 "dif_pi_format": 0 00:22:09.211 } 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "method": "bdev_wait_for_examine" 00:22:09.211 } 00:22:09.211 ] 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "subsystem": "nbd", 00:22:09.211 "config": [] 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "subsystem": "scheduler", 00:22:09.211 "config": [ 00:22:09.211 { 00:22:09.211 "method": "framework_set_scheduler", 00:22:09.211 "params": { 00:22:09.211 "name": "static" 00:22:09.211 } 00:22:09.211 } 00:22:09.211 ] 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "subsystem": "nvmf", 00:22:09.211 "config": [ 00:22:09.211 { 00:22:09.211 "method": "nvmf_set_config", 00:22:09.211 "params": { 00:22:09.211 "discovery_filter": "match_any", 00:22:09.211 "admin_cmd_passthru": { 00:22:09.211 "identify_ctrlr": false 00:22:09.211 } 00:22:09.211 } 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "method": "nvmf_set_max_subsystems", 00:22:09.211 "params": { 00:22:09.211 "max_subsystems": 1024 00:22:09.211 } 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "method": "nvmf_set_crdt", 00:22:09.211 "params": { 00:22:09.211 "crdt1": 0, 00:22:09.211 "crdt2": 0, 00:22:09.211 "crdt3": 0 00:22:09.211 } 00:22:09.211 }, 00:22:09.211 { 00:22:09.211 "method": "nvmf_create_transport", 00:22:09.211 "params": { 00:22:09.211 "trtype": "TCP", 00:22:09.211 "max_queue_depth": 128, 00:22:09.211 "max_io_qpairs_per_ctrlr": 127, 00:22:09.211 "in_capsule_data_size": 4096, 00:22:09.211 "max_io_size": 131072, 00:22:09.211 "io_unit_size": 131072, 00:22:09.211 "max_aq_depth": 128, 00:22:09.211 "num_shared_buffers": 511, 00:22:09.211 "buf_cache_size": 4294967295, 00:22:09.211 "dif_insert_or_strip": false, 00:22:09.211 "zcopy": false, 00:22:09.212 "c2h_success": false, 00:22:09.212 "sock_priority": 0, 00:22:09.212 "abort_timeout_sec": 1, 00:22:09.212 "ack_timeout": 0, 00:22:09.212 "data_wr_pool_size": 0 00:22:09.212 } 00:22:09.212 }, 00:22:09.212 { 00:22:09.212 "method": "nvmf_create_subsystem", 00:22:09.212 "params": { 00:22:09.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.212 "allow_any_host": false, 00:22:09.212 "serial_number": "SPDK00000000000001", 00:22:09.212 "model_number": "SPDK bdev Controller", 00:22:09.212 "max_namespaces": 10, 00:22:09.212 "min_cntlid": 1, 00:22:09.212 "max_cntlid": 65519, 00:22:09.212 "ana_reporting": false 00:22:09.212 } 00:22:09.212 }, 00:22:09.212 { 00:22:09.212 "method": "nvmf_subsystem_add_host", 00:22:09.212 "params": { 00:22:09.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.212 "host": "nqn.2016-06.io.spdk:host1", 00:22:09.212 "psk": "/tmp/tmp.pqHjbzYrmH" 00:22:09.212 } 00:22:09.212 }, 00:22:09.212 { 00:22:09.212 "method": "nvmf_subsystem_add_ns", 00:22:09.212 "params": { 00:22:09.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.212 "namespace": { 00:22:09.212 "nsid": 1, 00:22:09.212 
"bdev_name": "malloc0", 00:22:09.212 "nguid": "358684789B3A472292B54CB2EBD52F7A", 00:22:09.212 "uuid": "35868478-9b3a-4722-92b5-4cb2ebd52f7a", 00:22:09.212 "no_auto_visible": false 00:22:09.212 } 00:22:09.212 } 00:22:09.212 }, 00:22:09.212 { 00:22:09.212 "method": "nvmf_subsystem_add_listener", 00:22:09.212 "params": { 00:22:09.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.212 "listen_address": { 00:22:09.212 "trtype": "TCP", 00:22:09.212 "adrfam": "IPv4", 00:22:09.212 "traddr": "10.0.0.2", 00:22:09.212 "trsvcid": "4420" 00:22:09.212 }, 00:22:09.212 "secure_channel": true 00:22:09.212 } 00:22:09.212 } 00:22:09.212 ] 00:22:09.212 } 00:22:09.212 ] 00:22:09.212 }' 00:22:09.212 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:09.472 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:09.472 "subsystems": [ 00:22:09.472 { 00:22:09.472 "subsystem": "keyring", 00:22:09.472 "config": [] 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "subsystem": "iobuf", 00:22:09.472 "config": [ 00:22:09.472 { 00:22:09.472 "method": "iobuf_set_options", 00:22:09.472 "params": { 00:22:09.472 "small_pool_count": 8192, 00:22:09.472 "large_pool_count": 1024, 00:22:09.472 "small_bufsize": 8192, 00:22:09.472 "large_bufsize": 135168 00:22:09.472 } 00:22:09.472 } 00:22:09.472 ] 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "subsystem": "sock", 00:22:09.472 "config": [ 00:22:09.472 { 00:22:09.472 "method": "sock_set_default_impl", 00:22:09.472 "params": { 00:22:09.472 "impl_name": "posix" 00:22:09.472 } 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "method": "sock_impl_set_options", 00:22:09.472 "params": { 00:22:09.472 "impl_name": "ssl", 00:22:09.472 "recv_buf_size": 4096, 00:22:09.472 "send_buf_size": 4096, 00:22:09.472 "enable_recv_pipe": true, 00:22:09.472 "enable_quickack": false, 00:22:09.472 "enable_placement_id": 0, 00:22:09.472 "enable_zerocopy_send_server": true, 00:22:09.472 "enable_zerocopy_send_client": false, 00:22:09.472 "zerocopy_threshold": 0, 00:22:09.472 "tls_version": 0, 00:22:09.472 "enable_ktls": false 00:22:09.472 } 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "method": "sock_impl_set_options", 00:22:09.472 "params": { 00:22:09.472 "impl_name": "posix", 00:22:09.472 "recv_buf_size": 2097152, 00:22:09.472 "send_buf_size": 2097152, 00:22:09.472 "enable_recv_pipe": true, 00:22:09.472 "enable_quickack": false, 00:22:09.472 "enable_placement_id": 0, 00:22:09.472 "enable_zerocopy_send_server": true, 00:22:09.472 "enable_zerocopy_send_client": false, 00:22:09.472 "zerocopy_threshold": 0, 00:22:09.472 "tls_version": 0, 00:22:09.472 "enable_ktls": false 00:22:09.472 } 00:22:09.472 } 00:22:09.472 ] 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "subsystem": "vmd", 00:22:09.472 "config": [] 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "subsystem": "accel", 00:22:09.472 "config": [ 00:22:09.472 { 00:22:09.472 "method": "accel_set_options", 00:22:09.472 "params": { 00:22:09.472 "small_cache_size": 128, 00:22:09.472 "large_cache_size": 16, 00:22:09.472 "task_count": 2048, 00:22:09.472 "sequence_count": 2048, 00:22:09.472 "buf_count": 2048 00:22:09.472 } 00:22:09.472 } 00:22:09.472 ] 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "subsystem": "bdev", 00:22:09.472 "config": [ 00:22:09.472 { 00:22:09.472 "method": "bdev_set_options", 00:22:09.472 "params": { 00:22:09.472 "bdev_io_pool_size": 65535, 00:22:09.472 "bdev_io_cache_size": 256, 00:22:09.472 
"bdev_auto_examine": true, 00:22:09.472 "iobuf_small_cache_size": 128, 00:22:09.472 "iobuf_large_cache_size": 16 00:22:09.472 } 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "method": "bdev_raid_set_options", 00:22:09.472 "params": { 00:22:09.472 "process_window_size_kb": 1024, 00:22:09.472 "process_max_bandwidth_mb_sec": 0 00:22:09.472 } 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "method": "bdev_iscsi_set_options", 00:22:09.472 "params": { 00:22:09.472 "timeout_sec": 30 00:22:09.472 } 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "method": "bdev_nvme_set_options", 00:22:09.472 "params": { 00:22:09.472 "action_on_timeout": "none", 00:22:09.472 "timeout_us": 0, 00:22:09.472 "timeout_admin_us": 0, 00:22:09.472 "keep_alive_timeout_ms": 10000, 00:22:09.472 "arbitration_burst": 0, 00:22:09.472 "low_priority_weight": 0, 00:22:09.472 "medium_priority_weight": 0, 00:22:09.472 "high_priority_weight": 0, 00:22:09.472 "nvme_adminq_poll_period_us": 10000, 00:22:09.472 "nvme_ioq_poll_period_us": 0, 00:22:09.472 "io_queue_requests": 512, 00:22:09.472 "delay_cmd_submit": true, 00:22:09.472 "transport_retry_count": 4, 00:22:09.472 "bdev_retry_count": 3, 00:22:09.472 "transport_ack_timeout": 0, 00:22:09.472 "ctrlr_loss_timeout_sec": 0, 00:22:09.472 "reconnect_delay_sec": 0, 00:22:09.472 "fast_io_fail_timeout_sec": 0, 00:22:09.472 "disable_auto_failback": false, 00:22:09.472 "generate_uuids": false, 00:22:09.472 "transport_tos": 0, 00:22:09.472 "nvme_error_stat": false, 00:22:09.472 "rdma_srq_size": 0, 00:22:09.472 "io_path_stat": false, 00:22:09.472 "allow_accel_sequence": false, 00:22:09.472 "rdma_max_cq_size": 0, 00:22:09.472 "rdma_cm_event_timeout_ms": 0, 00:22:09.472 "dhchap_digests": [ 00:22:09.472 "sha256", 00:22:09.472 "sha384", 00:22:09.472 "sha512" 00:22:09.472 ], 00:22:09.472 "dhchap_dhgroups": [ 00:22:09.472 "null", 00:22:09.472 "ffdhe2048", 00:22:09.472 "ffdhe3072", 00:22:09.472 "ffdhe4096", 00:22:09.472 "ffdhe6144", 00:22:09.472 "ffdhe8192" 00:22:09.472 ] 00:22:09.472 } 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "method": "bdev_nvme_attach_controller", 00:22:09.472 "params": { 00:22:09.472 "name": "TLSTEST", 00:22:09.472 "trtype": "TCP", 00:22:09.472 "adrfam": "IPv4", 00:22:09.472 "traddr": "10.0.0.2", 00:22:09.472 "trsvcid": "4420", 00:22:09.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.472 "prchk_reftag": false, 00:22:09.472 "prchk_guard": false, 00:22:09.472 "ctrlr_loss_timeout_sec": 0, 00:22:09.472 "reconnect_delay_sec": 0, 00:22:09.472 "fast_io_fail_timeout_sec": 0, 00:22:09.472 "psk": "/tmp/tmp.pqHjbzYrmH", 00:22:09.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.472 "hdgst": false, 00:22:09.472 "ddgst": false 00:22:09.472 } 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "method": "bdev_nvme_set_hotplug", 00:22:09.472 "params": { 00:22:09.472 "period_us": 100000, 00:22:09.472 "enable": false 00:22:09.472 } 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "method": "bdev_wait_for_examine" 00:22:09.472 } 00:22:09.472 ] 00:22:09.472 }, 00:22:09.472 { 00:22:09.472 "subsystem": "nbd", 00:22:09.472 "config": [] 00:22:09.472 } 00:22:09.472 ] 00:22:09.472 }' 00:22:09.472 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 135952 00:22:09.472 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 135952 ']' 00:22:09.472 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 135952 00:22:09.472 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:09.472 
07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:09.472 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 135952 00:22:09.472 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:09.472 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:09.472 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 135952' 00:22:09.472 killing process with pid 135952 00:22:09.472 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 135952 00:22:09.472 Received shutdown signal, test time was about 10.000000 seconds 00:22:09.472 00:22:09.472 Latency(us) 00:22:09.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.472 =================================================================================================================== 00:22:09.472 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:09.472 [2024-07-25 07:28:16.710376] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:09.473 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 135952 00:22:09.473 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 135589 00:22:09.473 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 135589 ']' 00:22:09.473 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 135589 00:22:09.473 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:09.473 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:09.473 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 135589 00:22:09.734 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:09.734 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:09.734 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 135589' 00:22:09.734 killing process with pid 135589 00:22:09.734 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 135589 00:22:09.734 [2024-07-25 07:28:16.879349] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:09.734 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 135589 00:22:09.734 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:09.734 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:09.734 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:09.734 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.734 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:09.734 "subsystems": [ 00:22:09.734 { 00:22:09.734 "subsystem": 
"keyring", 00:22:09.734 "config": [] 00:22:09.734 }, 00:22:09.734 { 00:22:09.734 "subsystem": "iobuf", 00:22:09.734 "config": [ 00:22:09.734 { 00:22:09.734 "method": "iobuf_set_options", 00:22:09.734 "params": { 00:22:09.734 "small_pool_count": 8192, 00:22:09.734 "large_pool_count": 1024, 00:22:09.734 "small_bufsize": 8192, 00:22:09.734 "large_bufsize": 135168 00:22:09.734 } 00:22:09.734 } 00:22:09.734 ] 00:22:09.734 }, 00:22:09.734 { 00:22:09.734 "subsystem": "sock", 00:22:09.734 "config": [ 00:22:09.734 { 00:22:09.734 "method": "sock_set_default_impl", 00:22:09.734 "params": { 00:22:09.734 "impl_name": "posix" 00:22:09.734 } 00:22:09.734 }, 00:22:09.734 { 00:22:09.734 "method": "sock_impl_set_options", 00:22:09.734 "params": { 00:22:09.734 "impl_name": "ssl", 00:22:09.734 "recv_buf_size": 4096, 00:22:09.734 "send_buf_size": 4096, 00:22:09.734 "enable_recv_pipe": true, 00:22:09.734 "enable_quickack": false, 00:22:09.734 "enable_placement_id": 0, 00:22:09.734 "enable_zerocopy_send_server": true, 00:22:09.734 "enable_zerocopy_send_client": false, 00:22:09.734 "zerocopy_threshold": 0, 00:22:09.734 "tls_version": 0, 00:22:09.734 "enable_ktls": false 00:22:09.734 } 00:22:09.734 }, 00:22:09.734 { 00:22:09.734 "method": "sock_impl_set_options", 00:22:09.734 "params": { 00:22:09.734 "impl_name": "posix", 00:22:09.734 "recv_buf_size": 2097152, 00:22:09.734 "send_buf_size": 2097152, 00:22:09.734 "enable_recv_pipe": true, 00:22:09.734 "enable_quickack": false, 00:22:09.734 "enable_placement_id": 0, 00:22:09.734 "enable_zerocopy_send_server": true, 00:22:09.734 "enable_zerocopy_send_client": false, 00:22:09.734 "zerocopy_threshold": 0, 00:22:09.734 "tls_version": 0, 00:22:09.734 "enable_ktls": false 00:22:09.734 } 00:22:09.734 } 00:22:09.734 ] 00:22:09.734 }, 00:22:09.734 { 00:22:09.734 "subsystem": "vmd", 00:22:09.734 "config": [] 00:22:09.734 }, 00:22:09.734 { 00:22:09.734 "subsystem": "accel", 00:22:09.734 "config": [ 00:22:09.734 { 00:22:09.734 "method": "accel_set_options", 00:22:09.734 "params": { 00:22:09.734 "small_cache_size": 128, 00:22:09.734 "large_cache_size": 16, 00:22:09.734 "task_count": 2048, 00:22:09.734 "sequence_count": 2048, 00:22:09.734 "buf_count": 2048 00:22:09.734 } 00:22:09.734 } 00:22:09.734 ] 00:22:09.734 }, 00:22:09.734 { 00:22:09.734 "subsystem": "bdev", 00:22:09.734 "config": [ 00:22:09.734 { 00:22:09.734 "method": "bdev_set_options", 00:22:09.734 "params": { 00:22:09.734 "bdev_io_pool_size": 65535, 00:22:09.734 "bdev_io_cache_size": 256, 00:22:09.734 "bdev_auto_examine": true, 00:22:09.734 "iobuf_small_cache_size": 128, 00:22:09.734 "iobuf_large_cache_size": 16 00:22:09.734 } 00:22:09.734 }, 00:22:09.734 { 00:22:09.734 "method": "bdev_raid_set_options", 00:22:09.734 "params": { 00:22:09.734 "process_window_size_kb": 1024, 00:22:09.734 "process_max_bandwidth_mb_sec": 0 00:22:09.734 } 00:22:09.734 }, 00:22:09.734 { 00:22:09.734 "method": "bdev_iscsi_set_options", 00:22:09.734 "params": { 00:22:09.734 "timeout_sec": 30 00:22:09.734 } 00:22:09.734 }, 00:22:09.734 { 00:22:09.734 "method": "bdev_nvme_set_options", 00:22:09.734 "params": { 00:22:09.734 "action_on_timeout": "none", 00:22:09.734 "timeout_us": 0, 00:22:09.734 "timeout_admin_us": 0, 00:22:09.734 "keep_alive_timeout_ms": 10000, 00:22:09.734 "arbitration_burst": 0, 00:22:09.734 "low_priority_weight": 0, 00:22:09.734 "medium_priority_weight": 0, 00:22:09.734 "high_priority_weight": 0, 00:22:09.734 "nvme_adminq_poll_period_us": 10000, 00:22:09.734 "nvme_ioq_poll_period_us": 0, 00:22:09.734 "io_queue_requests": 0, 
00:22:09.734 "delay_cmd_submit": true, 00:22:09.734 "transport_retry_count": 4, 00:22:09.734 "bdev_retry_count": 3, 00:22:09.734 "transport_ack_timeout": 0, 00:22:09.734 "ctrlr_loss_timeout_sec": 0, 00:22:09.734 "reconnect_delay_sec": 0, 00:22:09.734 "fast_io_fail_timeout_sec": 0, 00:22:09.734 "disable_auto_failback": false, 00:22:09.734 "generate_uuids": false, 00:22:09.734 "transport_tos": 0, 00:22:09.734 "nvme_error_stat": false, 00:22:09.734 "rdma_srq_size": 0, 00:22:09.734 "io_path_stat": false, 00:22:09.734 "allow_accel_sequence": false, 00:22:09.734 "rdma_max_cq_size": 0, 00:22:09.734 "rdma_cm_event_timeout_ms": 0, 00:22:09.734 "dhchap_digests": [ 00:22:09.734 "sha256", 00:22:09.734 "sha384", 00:22:09.734 "sha512" 00:22:09.734 ], 00:22:09.734 "dhchap_dhgroups": [ 00:22:09.734 "null", 00:22:09.734 "ffdhe2048", 00:22:09.734 "ffdhe3072", 00:22:09.734 "ffdhe4096", 00:22:09.734 "ffdhe6144", 00:22:09.734 "ffdhe8192" 00:22:09.734 ] 00:22:09.734 } 00:22:09.734 }, 00:22:09.734 { 00:22:09.734 "method": "bdev_nvme_set_hotplug", 00:22:09.734 "params": { 00:22:09.734 "period_us": 100000, 00:22:09.734 "enable": false 00:22:09.735 } 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "method": "bdev_malloc_create", 00:22:09.735 "params": { 00:22:09.735 "name": "malloc0", 00:22:09.735 "num_blocks": 8192, 00:22:09.735 "block_size": 4096, 00:22:09.735 "physical_block_size": 4096, 00:22:09.735 "uuid": "35868478-9b3a-4722-92b5-4cb2ebd52f7a", 00:22:09.735 "optimal_io_boundary": 0, 00:22:09.735 "md_size": 0, 00:22:09.735 "dif_type": 0, 00:22:09.735 "dif_is_head_of_md": false, 00:22:09.735 "dif_pi_format": 0 00:22:09.735 } 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "method": "bdev_wait_for_examine" 00:22:09.735 } 00:22:09.735 ] 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "subsystem": "nbd", 00:22:09.735 "config": [] 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "subsystem": "scheduler", 00:22:09.735 "config": [ 00:22:09.735 { 00:22:09.735 "method": "framework_set_scheduler", 00:22:09.735 "params": { 00:22:09.735 "name": "static" 00:22:09.735 } 00:22:09.735 } 00:22:09.735 ] 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "subsystem": "nvmf", 00:22:09.735 "config": [ 00:22:09.735 { 00:22:09.735 "method": "nvmf_set_config", 00:22:09.735 "params": { 00:22:09.735 "discovery_filter": "match_any", 00:22:09.735 "admin_cmd_passthru": { 00:22:09.735 "identify_ctrlr": false 00:22:09.735 } 00:22:09.735 } 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "method": "nvmf_set_max_subsystems", 00:22:09.735 "params": { 00:22:09.735 "max_subsystems": 1024 00:22:09.735 } 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "method": "nvmf_set_crdt", 00:22:09.735 "params": { 00:22:09.735 "crdt1": 0, 00:22:09.735 "crdt2": 0, 00:22:09.735 "crdt3": 0 00:22:09.735 } 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "method": "nvmf_create_transport", 00:22:09.735 "params": { 00:22:09.735 "trtype": "TCP", 00:22:09.735 "max_queue_depth": 128, 00:22:09.735 "max_io_qpairs_per_ctrlr": 127, 00:22:09.735 "in_capsule_data_size": 4096, 00:22:09.735 "max_io_size": 131072, 00:22:09.735 "io_unit_size": 131072, 00:22:09.735 "max_aq_depth": 128, 00:22:09.735 "num_shared_buffers": 511, 00:22:09.735 "buf_cache_size": 4294967295, 00:22:09.735 "dif_insert_or_strip": false, 00:22:09.735 "zcopy": false, 00:22:09.735 "c2h_success": false, 00:22:09.735 "sock_priority": 0, 00:22:09.735 "abort_timeout_sec": 1, 00:22:09.735 "ack_timeout": 0, 00:22:09.735 "data_wr_pool_size": 0 00:22:09.735 } 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "method": "nvmf_create_subsystem", 00:22:09.735 
"params": { 00:22:09.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.735 "allow_any_host": false, 00:22:09.735 "serial_number": "SPDK00000000000001", 00:22:09.735 "model_number": "SPDK bdev Controller", 00:22:09.735 "max_namespaces": 10, 00:22:09.735 "min_cntlid": 1, 00:22:09.735 "max_cntlid": 65519, 00:22:09.735 "ana_reporting": false 00:22:09.735 } 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "method": "nvmf_subsystem_add_host", 00:22:09.735 "params": { 00:22:09.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.735 "host": "nqn.2016-06.io.spdk:host1", 00:22:09.735 "psk": "/tmp/tmp.pqHjbzYrmH" 00:22:09.735 } 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "method": "nvmf_subsystem_add_ns", 00:22:09.735 "params": { 00:22:09.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.735 "namespace": { 00:22:09.735 "nsid": 1, 00:22:09.735 "bdev_name": "malloc0", 00:22:09.735 "nguid": "358684789B3A472292B54CB2EBD52F7A", 00:22:09.735 "uuid": "35868478-9b3a-4722-92b5-4cb2ebd52f7a", 00:22:09.735 "no_auto_visible": false 00:22:09.735 } 00:22:09.735 } 00:22:09.735 }, 00:22:09.735 { 00:22:09.735 "method": "nvmf_subsystem_add_listener", 00:22:09.735 "params": { 00:22:09.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.735 "listen_address": { 00:22:09.735 "trtype": "TCP", 00:22:09.735 "adrfam": "IPv4", 00:22:09.735 "traddr": "10.0.0.2", 00:22:09.735 "trsvcid": "4420" 00:22:09.735 }, 00:22:09.735 "secure_channel": true 00:22:09.735 } 00:22:09.735 } 00:22:09.735 ] 00:22:09.735 } 00:22:09.735 ] 00:22:09.735 }' 00:22:09.735 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=136179 00:22:09.735 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 136179 00:22:09.735 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:09.735 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 136179 ']' 00:22:09.735 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.735 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:09.735 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.735 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:09.735 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.735 [2024-07-25 07:28:17.060016] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:22:09.735 [2024-07-25 07:28:17.060071] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.735 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.996 [2024-07-25 07:28:17.144010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.996 [2024-07-25 07:28:17.198055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:09.996 [2024-07-25 07:28:17.198087] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.996 [2024-07-25 07:28:17.198092] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.996 [2024-07-25 07:28:17.198097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.996 [2024-07-25 07:28:17.198101] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.996 [2024-07-25 07:28:17.198141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.256 [2024-07-25 07:28:17.381408] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.256 [2024-07-25 07:28:17.404225] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:10.256 [2024-07-25 07:28:17.420278] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:10.256 [2024-07-25 07:28:17.420470] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=136333 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 136333 /var/tmp/bdevperf.sock 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 136333 ']' 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.517 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:10.517 "subsystems": [ 00:22:10.517 { 00:22:10.517 "subsystem": "keyring", 00:22:10.517 "config": [] 00:22:10.517 }, 00:22:10.517 { 00:22:10.517 "subsystem": "iobuf", 00:22:10.517 "config": [ 00:22:10.517 { 00:22:10.517 "method": "iobuf_set_options", 00:22:10.517 "params": { 00:22:10.517 "small_pool_count": 8192, 00:22:10.517 "large_pool_count": 1024, 00:22:10.517 "small_bufsize": 8192, 00:22:10.517 "large_bufsize": 135168 00:22:10.517 } 00:22:10.518 } 00:22:10.518 ] 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "subsystem": "sock", 00:22:10.518 "config": [ 00:22:10.518 { 00:22:10.518 "method": "sock_set_default_impl", 00:22:10.518 "params": { 00:22:10.518 "impl_name": "posix" 00:22:10.518 } 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "method": "sock_impl_set_options", 00:22:10.518 "params": { 00:22:10.518 "impl_name": "ssl", 00:22:10.518 "recv_buf_size": 4096, 00:22:10.518 "send_buf_size": 4096, 00:22:10.518 "enable_recv_pipe": true, 00:22:10.518 "enable_quickack": false, 00:22:10.518 "enable_placement_id": 0, 00:22:10.518 "enable_zerocopy_send_server": true, 00:22:10.518 "enable_zerocopy_send_client": false, 00:22:10.518 "zerocopy_threshold": 0, 00:22:10.518 "tls_version": 0, 00:22:10.518 "enable_ktls": false 00:22:10.518 } 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "method": "sock_impl_set_options", 00:22:10.518 "params": { 00:22:10.518 "impl_name": "posix", 00:22:10.518 "recv_buf_size": 2097152, 00:22:10.518 "send_buf_size": 2097152, 00:22:10.518 "enable_recv_pipe": true, 00:22:10.518 "enable_quickack": false, 00:22:10.518 "enable_placement_id": 0, 00:22:10.518 "enable_zerocopy_send_server": true, 00:22:10.518 "enable_zerocopy_send_client": false, 00:22:10.518 "zerocopy_threshold": 0, 00:22:10.518 "tls_version": 0, 00:22:10.518 "enable_ktls": false 00:22:10.518 } 00:22:10.518 } 00:22:10.518 ] 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "subsystem": "vmd", 00:22:10.518 "config": [] 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "subsystem": "accel", 00:22:10.518 "config": [ 00:22:10.518 { 00:22:10.518 "method": "accel_set_options", 00:22:10.518 "params": { 00:22:10.518 "small_cache_size": 128, 00:22:10.518 "large_cache_size": 16, 00:22:10.518 "task_count": 2048, 00:22:10.518 "sequence_count": 2048, 00:22:10.518 "buf_count": 2048 00:22:10.518 } 00:22:10.518 } 00:22:10.518 ] 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "subsystem": "bdev", 00:22:10.518 "config": [ 00:22:10.518 { 00:22:10.518 "method": "bdev_set_options", 00:22:10.518 "params": { 00:22:10.518 "bdev_io_pool_size": 65535, 00:22:10.518 "bdev_io_cache_size": 256, 00:22:10.518 "bdev_auto_examine": true, 00:22:10.518 "iobuf_small_cache_size": 128, 00:22:10.518 "iobuf_large_cache_size": 16 00:22:10.518 } 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "method": "bdev_raid_set_options", 00:22:10.518 "params": { 00:22:10.518 "process_window_size_kb": 1024, 00:22:10.518 "process_max_bandwidth_mb_sec": 0 00:22:10.518 } 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "method": "bdev_iscsi_set_options", 
00:22:10.518 "params": { 00:22:10.518 "timeout_sec": 30 00:22:10.518 } 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "method": "bdev_nvme_set_options", 00:22:10.518 "params": { 00:22:10.518 "action_on_timeout": "none", 00:22:10.518 "timeout_us": 0, 00:22:10.518 "timeout_admin_us": 0, 00:22:10.518 "keep_alive_timeout_ms": 10000, 00:22:10.518 "arbitration_burst": 0, 00:22:10.518 "low_priority_weight": 0, 00:22:10.518 "medium_priority_weight": 0, 00:22:10.518 "high_priority_weight": 0, 00:22:10.518 "nvme_adminq_poll_period_us": 10000, 00:22:10.518 "nvme_ioq_poll_period_us": 0, 00:22:10.518 "io_queue_requests": 512, 00:22:10.518 "delay_cmd_submit": true, 00:22:10.518 "transport_retry_count": 4, 00:22:10.518 "bdev_retry_count": 3, 00:22:10.518 "transport_ack_timeout": 0, 00:22:10.518 "ctrlr_loss_timeout_sec": 0, 00:22:10.518 "reconnect_delay_sec": 0, 00:22:10.518 "fast_io_fail_timeout_sec": 0, 00:22:10.518 "disable_auto_failback": false, 00:22:10.518 "generate_uuids": false, 00:22:10.518 "transport_tos": 0, 00:22:10.518 "nvme_error_stat": false, 00:22:10.518 "rdma_srq_size": 0, 00:22:10.518 "io_path_stat": false, 00:22:10.518 "allow_accel_sequence": false, 00:22:10.518 "rdma_max_cq_size": 0, 00:22:10.518 "rdma_cm_event_timeout_ms": 0, 00:22:10.518 "dhchap_digests": [ 00:22:10.518 "sha256", 00:22:10.518 "sha384", 00:22:10.518 "sha512" 00:22:10.518 ], 00:22:10.518 "dhchap_dhgroups": [ 00:22:10.518 "null", 00:22:10.518 "ffdhe2048", 00:22:10.518 "ffdhe3072", 00:22:10.518 "ffdhe4096", 00:22:10.518 "ffdhe6144", 00:22:10.518 "ffdhe8192" 00:22:10.518 ] 00:22:10.518 } 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "method": "bdev_nvme_attach_controller", 00:22:10.518 "params": { 00:22:10.518 "name": "TLSTEST", 00:22:10.518 "trtype": "TCP", 00:22:10.518 "adrfam": "IPv4", 00:22:10.518 "traddr": "10.0.0.2", 00:22:10.518 "trsvcid": "4420", 00:22:10.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.518 "prchk_reftag": false, 00:22:10.518 "prchk_guard": false, 00:22:10.518 "ctrlr_loss_timeout_sec": 0, 00:22:10.518 "reconnect_delay_sec": 0, 00:22:10.518 "fast_io_fail_timeout_sec": 0, 00:22:10.518 "psk": "/tmp/tmp.pqHjbzYrmH", 00:22:10.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.518 "hdgst": false, 00:22:10.518 "ddgst": false 00:22:10.518 } 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "method": "bdev_nvme_set_hotplug", 00:22:10.518 "params": { 00:22:10.518 "period_us": 100000, 00:22:10.518 "enable": false 00:22:10.518 } 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "method": "bdev_wait_for_examine" 00:22:10.518 } 00:22:10.518 ] 00:22:10.518 }, 00:22:10.518 { 00:22:10.518 "subsystem": "nbd", 00:22:10.518 "config": [] 00:22:10.518 } 00:22:10.518 ] 00:22:10.518 }' 00:22:10.779 [2024-07-25 07:28:17.917179] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:22:10.779 [2024-07-25 07:28:17.917259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136333 ] 00:22:10.779 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.779 [2024-07-25 07:28:17.968893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.779 [2024-07-25 07:28:18.021935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.779 [2024-07-25 07:28:18.146515] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.779 [2024-07-25 07:28:18.146577] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:11.350 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.350 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:11.350 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:11.611 Running I/O for 10 seconds... 00:22:21.611 00:22:21.611 Latency(us) 00:22:21.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.611 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.611 Verification LBA range: start 0x0 length 0x2000 00:22:21.611 TLSTESTn1 : 10.07 2158.02 8.43 0.00 0.00 59109.27 4942.51 131945.81 00:22:21.611 =================================================================================================================== 00:22:21.611 Total : 2158.02 8.43 0.00 0.00 59109.27 4942.51 131945.81 00:22:21.611 0 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 136333 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 136333 ']' 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 136333 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136333 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136333' 00:22:21.611 killing process with pid 136333 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 136333 00:22:21.611 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.611 00:22:21.611 Latency(us) 00:22:21.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.611 
=================================================================================================================== 00:22:21.611 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.611 [2024-07-25 07:28:28.939019] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:21.611 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 136333 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 136179 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 136179 ']' 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 136179 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 136179 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 136179' 00:22:21.872 killing process with pid 136179 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 136179 00:22:21.872 [2024-07-25 07:28:29.108133] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 136179 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=138624 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 138624 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 138624 ']' 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:21.872 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.132 [2024-07-25 07:28:29.294964] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:22:22.132 [2024-07-25 07:28:29.295019] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.132 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.132 [2024-07-25 07:28:29.359395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.132 [2024-07-25 07:28:29.422664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.132 [2024-07-25 07:28:29.422703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.132 [2024-07-25 07:28:29.422711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.132 [2024-07-25 07:28:29.422717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.132 [2024-07-25 07:28:29.422723] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.132 [2024-07-25 07:28:29.422743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.705 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:22.705 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:22.705 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.705 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:22.705 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.025 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.025 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.pqHjbzYrmH 00:22:23.025 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.pqHjbzYrmH 00:22:23.025 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:23.025 [2024-07-25 07:28:30.241607] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.025 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:23.286 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:23.286 [2024-07-25 07:28:30.570430] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.286 [2024-07-25 07:28:30.570653] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.286 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:23.547 malloc0 00:22:23.547 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:23.807 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pqHjbzYrmH 00:22:23.807 [2024-07-25 07:28:31.078403] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:23.807 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=139037 00:22:23.807 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:23.807 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:23.807 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 139037 /var/tmp/bdevperf.sock 00:22:23.807 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 139037 ']' 00:22:23.807 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.807 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.807 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.807 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.807 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.807 [2024-07-25 07:28:31.156636] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:22:23.807 [2024-07-25 07:28:31.156687] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139037 ] 00:22:24.068 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.068 [2024-07-25 07:28:31.232786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.068 [2024-07-25 07:28:31.285911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.639 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.639 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:24.639 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pqHjbzYrmH 00:22:24.900 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:24.900 [2024-07-25 07:28:32.199898] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.161 nvme0n1 00:22:25.161 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:25.161 Running I/O for 1 seconds... 00:22:26.546 00:22:26.546 Latency(us) 00:22:26.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.546 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:26.546 Verification LBA range: start 0x0 length 0x2000 00:22:26.546 nvme0n1 : 1.07 1679.78 6.56 0.00 0.00 73874.96 6062.08 124081.49 00:22:26.546 =================================================================================================================== 00:22:26.546 Total : 1679.78 6.56 0.00 0.00 73874.96 6062.08 124081.49 00:22:26.546 0 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 139037 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 139037 ']' 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 139037 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 139037 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 139037' 00:22:26.546 killing process with pid 139037 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 139037 00:22:26.546 Received shutdown signal, test time 
was about 1.000000 seconds 00:22:26.546 00:22:26.546 Latency(us) 00:22:26.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.546 =================================================================================================================== 00:22:26.546 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 139037 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 138624 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 138624 ']' 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 138624 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 138624 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 138624' 00:22:26.546 killing process with pid 138624 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 138624 00:22:26.546 [2024-07-25 07:28:33.707668] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 138624 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=139422 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 139422 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 139422 ']' 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.546 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.546 [2024-07-25 07:28:33.907296] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:22:26.546 [2024-07-25 07:28:33.907351] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.807 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.807 [2024-07-25 07:28:33.972436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.807 [2024-07-25 07:28:34.036248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.807 [2024-07-25 07:28:34.036285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.807 [2024-07-25 07:28:34.036293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.807 [2024-07-25 07:28:34.036300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.807 [2024-07-25 07:28:34.036305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.807 [2024-07-25 07:28:34.036325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.378 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.378 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:27.378 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:27.378 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.378 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.378 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.378 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:27.378 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.378 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.378 [2024-07-25 07:28:34.731290] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.378 malloc0 00:22:27.639 [2024-07-25 07:28:34.758094] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:27.639 [2024-07-25 07:28:34.765407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.639 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.639 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=139740 00:22:27.639 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 139740 /var/tmp/bdevperf.sock 00:22:27.639 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:27.639 07:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 139740 ']' 00:22:27.639 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.639 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.639 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.639 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.639 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.639 [2024-07-25 07:28:34.838955] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:22:27.639 [2024-07-25 07:28:34.839001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139740 ] 00:22:27.639 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.639 [2024-07-25 07:28:34.911234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.639 [2024-07-25 07:28:34.964733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.596 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.596 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:28.596 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pqHjbzYrmH 00:22:28.596 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:28.596 [2024-07-25 07:28:35.894460] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.857 nvme0n1 00:22:28.857 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:28.857 Running I/O for 1 seconds... 
00:22:29.799 00:22:29.799 Latency(us) 00:22:29.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.799 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:29.799 Verification LBA range: start 0x0 length 0x2000 00:22:29.799 nvme0n1 : 1.06 1574.92 6.15 0.00 0.00 79405.87 6089.39 132819.63 00:22:29.800 =================================================================================================================== 00:22:29.800 Total : 1574.92 6.15 0.00 0.00 79405.87 6089.39 132819.63 00:22:29.800 0 00:22:29.800 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:29.800 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.800 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.060 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.060 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:30.060 "subsystems": [ 00:22:30.060 { 00:22:30.060 "subsystem": "keyring", 00:22:30.060 "config": [ 00:22:30.060 { 00:22:30.060 "method": "keyring_file_add_key", 00:22:30.060 "params": { 00:22:30.060 "name": "key0", 00:22:30.060 "path": "/tmp/tmp.pqHjbzYrmH" 00:22:30.060 } 00:22:30.060 } 00:22:30.060 ] 00:22:30.060 }, 00:22:30.060 { 00:22:30.060 "subsystem": "iobuf", 00:22:30.060 "config": [ 00:22:30.060 { 00:22:30.060 "method": "iobuf_set_options", 00:22:30.060 "params": { 00:22:30.060 "small_pool_count": 8192, 00:22:30.060 "large_pool_count": 1024, 00:22:30.060 "small_bufsize": 8192, 00:22:30.060 "large_bufsize": 135168 00:22:30.060 } 00:22:30.061 } 00:22:30.061 ] 00:22:30.061 }, 00:22:30.061 { 00:22:30.061 "subsystem": "sock", 00:22:30.061 "config": [ 00:22:30.061 { 00:22:30.061 "method": "sock_set_default_impl", 00:22:30.061 "params": { 00:22:30.061 "impl_name": "posix" 00:22:30.061 } 00:22:30.061 }, 00:22:30.061 { 00:22:30.061 "method": "sock_impl_set_options", 00:22:30.061 "params": { 00:22:30.061 "impl_name": "ssl", 00:22:30.061 "recv_buf_size": 4096, 00:22:30.061 "send_buf_size": 4096, 00:22:30.061 "enable_recv_pipe": true, 00:22:30.061 "enable_quickack": false, 00:22:30.061 "enable_placement_id": 0, 00:22:30.061 "enable_zerocopy_send_server": true, 00:22:30.061 "enable_zerocopy_send_client": false, 00:22:30.061 "zerocopy_threshold": 0, 00:22:30.061 "tls_version": 0, 00:22:30.061 "enable_ktls": false 00:22:30.061 } 00:22:30.061 }, 00:22:30.061 { 00:22:30.061 "method": "sock_impl_set_options", 00:22:30.061 "params": { 00:22:30.061 "impl_name": "posix", 00:22:30.061 "recv_buf_size": 2097152, 00:22:30.061 "send_buf_size": 2097152, 00:22:30.061 "enable_recv_pipe": true, 00:22:30.061 "enable_quickack": false, 00:22:30.061 "enable_placement_id": 0, 00:22:30.061 "enable_zerocopy_send_server": true, 00:22:30.061 "enable_zerocopy_send_client": false, 00:22:30.061 "zerocopy_threshold": 0, 00:22:30.061 "tls_version": 0, 00:22:30.061 "enable_ktls": false 00:22:30.061 } 00:22:30.061 } 00:22:30.061 ] 00:22:30.061 }, 00:22:30.061 { 00:22:30.061 "subsystem": "vmd", 00:22:30.061 "config": [] 00:22:30.061 }, 00:22:30.061 { 00:22:30.061 "subsystem": "accel", 00:22:30.061 "config": [ 00:22:30.061 { 00:22:30.061 "method": "accel_set_options", 00:22:30.061 "params": { 00:22:30.061 "small_cache_size": 128, 00:22:30.061 "large_cache_size": 16, 00:22:30.061 "task_count": 2048, 00:22:30.061 "sequence_count": 2048, 00:22:30.061 "buf_count": 
2048 00:22:30.061 } 00:22:30.061 } 00:22:30.061 ] 00:22:30.061 }, 00:22:30.061 { 00:22:30.061 "subsystem": "bdev", 00:22:30.061 "config": [ 00:22:30.061 { 00:22:30.061 "method": "bdev_set_options", 00:22:30.061 "params": { 00:22:30.061 "bdev_io_pool_size": 65535, 00:22:30.061 "bdev_io_cache_size": 256, 00:22:30.061 "bdev_auto_examine": true, 00:22:30.061 "iobuf_small_cache_size": 128, 00:22:30.061 "iobuf_large_cache_size": 16 00:22:30.061 } 00:22:30.061 }, 00:22:30.061 { 00:22:30.061 "method": "bdev_raid_set_options", 00:22:30.061 "params": { 00:22:30.061 "process_window_size_kb": 1024, 00:22:30.061 "process_max_bandwidth_mb_sec": 0 00:22:30.061 } 00:22:30.061 }, 00:22:30.061 { 00:22:30.061 "method": "bdev_iscsi_set_options", 00:22:30.061 "params": { 00:22:30.061 "timeout_sec": 30 00:22:30.061 } 00:22:30.061 }, 00:22:30.061 { 00:22:30.061 "method": "bdev_nvme_set_options", 00:22:30.061 "params": { 00:22:30.061 "action_on_timeout": "none", 00:22:30.061 "timeout_us": 0, 00:22:30.061 "timeout_admin_us": 0, 00:22:30.061 "keep_alive_timeout_ms": 10000, 00:22:30.061 "arbitration_burst": 0, 00:22:30.061 "low_priority_weight": 0, 00:22:30.061 "medium_priority_weight": 0, 00:22:30.061 "high_priority_weight": 0, 00:22:30.061 "nvme_adminq_poll_period_us": 10000, 00:22:30.061 "nvme_ioq_poll_period_us": 0, 00:22:30.061 "io_queue_requests": 0, 00:22:30.061 "delay_cmd_submit": true, 00:22:30.061 "transport_retry_count": 4, 00:22:30.061 "bdev_retry_count": 3, 00:22:30.061 "transport_ack_timeout": 0, 00:22:30.061 "ctrlr_loss_timeout_sec": 0, 00:22:30.061 "reconnect_delay_sec": 0, 00:22:30.061 "fast_io_fail_timeout_sec": 0, 00:22:30.061 "disable_auto_failback": false, 00:22:30.061 "generate_uuids": false, 00:22:30.061 "transport_tos": 0, 00:22:30.061 "nvme_error_stat": false, 00:22:30.061 "rdma_srq_size": 0, 00:22:30.061 "io_path_stat": false, 00:22:30.061 "allow_accel_sequence": false, 00:22:30.061 "rdma_max_cq_size": 0, 00:22:30.061 "rdma_cm_event_timeout_ms": 0, 00:22:30.061 "dhchap_digests": [ 00:22:30.061 "sha256", 00:22:30.061 "sha384", 00:22:30.061 "sha512" 00:22:30.061 ], 00:22:30.061 "dhchap_dhgroups": [ 00:22:30.061 "null", 00:22:30.061 "ffdhe2048", 00:22:30.061 "ffdhe3072", 00:22:30.061 "ffdhe4096", 00:22:30.061 "ffdhe6144", 00:22:30.061 "ffdhe8192" 00:22:30.061 ] 00:22:30.061 } 00:22:30.061 }, 00:22:30.061 { 00:22:30.061 "method": "bdev_nvme_set_hotplug", 00:22:30.061 "params": { 00:22:30.061 "period_us": 100000, 00:22:30.061 "enable": false 00:22:30.061 } 00:22:30.061 }, 00:22:30.061 { 00:22:30.061 "method": "bdev_malloc_create", 00:22:30.061 "params": { 00:22:30.061 "name": "malloc0", 00:22:30.061 "num_blocks": 8192, 00:22:30.061 "block_size": 4096, 00:22:30.061 "physical_block_size": 4096, 00:22:30.061 "uuid": "cd7a4f53-94a3-4662-99eb-a79098a3384a", 00:22:30.061 "optimal_io_boundary": 0, 00:22:30.061 "md_size": 0, 00:22:30.062 "dif_type": 0, 00:22:30.062 "dif_is_head_of_md": false, 00:22:30.062 "dif_pi_format": 0 00:22:30.062 } 00:22:30.062 }, 00:22:30.062 { 00:22:30.062 "method": "bdev_wait_for_examine" 00:22:30.062 } 00:22:30.062 ] 00:22:30.062 }, 00:22:30.062 { 00:22:30.062 "subsystem": "nbd", 00:22:30.062 "config": [] 00:22:30.062 }, 00:22:30.062 { 00:22:30.062 "subsystem": "scheduler", 00:22:30.062 "config": [ 00:22:30.062 { 00:22:30.062 "method": "framework_set_scheduler", 00:22:30.062 "params": { 00:22:30.062 "name": "static" 00:22:30.062 } 00:22:30.062 } 00:22:30.062 ] 00:22:30.062 }, 00:22:30.062 { 00:22:30.062 "subsystem": "nvmf", 00:22:30.062 "config": [ 00:22:30.062 { 00:22:30.062 
"method": "nvmf_set_config", 00:22:30.062 "params": { 00:22:30.062 "discovery_filter": "match_any", 00:22:30.062 "admin_cmd_passthru": { 00:22:30.062 "identify_ctrlr": false 00:22:30.062 } 00:22:30.062 } 00:22:30.062 }, 00:22:30.062 { 00:22:30.062 "method": "nvmf_set_max_subsystems", 00:22:30.062 "params": { 00:22:30.062 "max_subsystems": 1024 00:22:30.062 } 00:22:30.062 }, 00:22:30.062 { 00:22:30.062 "method": "nvmf_set_crdt", 00:22:30.062 "params": { 00:22:30.062 "crdt1": 0, 00:22:30.062 "crdt2": 0, 00:22:30.062 "crdt3": 0 00:22:30.062 } 00:22:30.062 }, 00:22:30.062 { 00:22:30.062 "method": "nvmf_create_transport", 00:22:30.062 "params": { 00:22:30.062 "trtype": "TCP", 00:22:30.062 "max_queue_depth": 128, 00:22:30.062 "max_io_qpairs_per_ctrlr": 127, 00:22:30.062 "in_capsule_data_size": 4096, 00:22:30.062 "max_io_size": 131072, 00:22:30.062 "io_unit_size": 131072, 00:22:30.062 "max_aq_depth": 128, 00:22:30.062 "num_shared_buffers": 511, 00:22:30.062 "buf_cache_size": 4294967295, 00:22:30.062 "dif_insert_or_strip": false, 00:22:30.062 "zcopy": false, 00:22:30.062 "c2h_success": false, 00:22:30.062 "sock_priority": 0, 00:22:30.062 "abort_timeout_sec": 1, 00:22:30.062 "ack_timeout": 0, 00:22:30.062 "data_wr_pool_size": 0 00:22:30.062 } 00:22:30.062 }, 00:22:30.062 { 00:22:30.062 "method": "nvmf_create_subsystem", 00:22:30.062 "params": { 00:22:30.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.062 "allow_any_host": false, 00:22:30.062 "serial_number": "00000000000000000000", 00:22:30.062 "model_number": "SPDK bdev Controller", 00:22:30.062 "max_namespaces": 32, 00:22:30.062 "min_cntlid": 1, 00:22:30.062 "max_cntlid": 65519, 00:22:30.062 "ana_reporting": false 00:22:30.062 } 00:22:30.062 }, 00:22:30.062 { 00:22:30.062 "method": "nvmf_subsystem_add_host", 00:22:30.062 "params": { 00:22:30.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.062 "host": "nqn.2016-06.io.spdk:host1", 00:22:30.062 "psk": "key0" 00:22:30.062 } 00:22:30.062 }, 00:22:30.062 { 00:22:30.062 "method": "nvmf_subsystem_add_ns", 00:22:30.062 "params": { 00:22:30.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.062 "namespace": { 00:22:30.062 "nsid": 1, 00:22:30.062 "bdev_name": "malloc0", 00:22:30.062 "nguid": "CD7A4F5394A3466299EBA79098A3384A", 00:22:30.062 "uuid": "cd7a4f53-94a3-4662-99eb-a79098a3384a", 00:22:30.062 "no_auto_visible": false 00:22:30.062 } 00:22:30.062 } 00:22:30.062 }, 00:22:30.062 { 00:22:30.062 "method": "nvmf_subsystem_add_listener", 00:22:30.062 "params": { 00:22:30.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.062 "listen_address": { 00:22:30.062 "trtype": "TCP", 00:22:30.062 "adrfam": "IPv4", 00:22:30.062 "traddr": "10.0.0.2", 00:22:30.062 "trsvcid": "4420" 00:22:30.062 }, 00:22:30.062 "secure_channel": false, 00:22:30.062 "sock_impl": "ssl" 00:22:30.062 } 00:22:30.062 } 00:22:30.062 ] 00:22:30.062 } 00:22:30.062 ] 00:22:30.062 }' 00:22:30.062 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:30.324 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:30.324 "subsystems": [ 00:22:30.324 { 00:22:30.324 "subsystem": "keyring", 00:22:30.324 "config": [ 00:22:30.324 { 00:22:30.324 "method": "keyring_file_add_key", 00:22:30.324 "params": { 00:22:30.324 "name": "key0", 00:22:30.324 "path": "/tmp/tmp.pqHjbzYrmH" 00:22:30.324 } 00:22:30.324 } 00:22:30.324 ] 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "subsystem": "iobuf", 00:22:30.324 
"config": [ 00:22:30.324 { 00:22:30.324 "method": "iobuf_set_options", 00:22:30.324 "params": { 00:22:30.324 "small_pool_count": 8192, 00:22:30.324 "large_pool_count": 1024, 00:22:30.324 "small_bufsize": 8192, 00:22:30.324 "large_bufsize": 135168 00:22:30.324 } 00:22:30.324 } 00:22:30.324 ] 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "subsystem": "sock", 00:22:30.324 "config": [ 00:22:30.324 { 00:22:30.324 "method": "sock_set_default_impl", 00:22:30.324 "params": { 00:22:30.324 "impl_name": "posix" 00:22:30.324 } 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "method": "sock_impl_set_options", 00:22:30.324 "params": { 00:22:30.324 "impl_name": "ssl", 00:22:30.324 "recv_buf_size": 4096, 00:22:30.324 "send_buf_size": 4096, 00:22:30.324 "enable_recv_pipe": true, 00:22:30.324 "enable_quickack": false, 00:22:30.324 "enable_placement_id": 0, 00:22:30.324 "enable_zerocopy_send_server": true, 00:22:30.324 "enable_zerocopy_send_client": false, 00:22:30.324 "zerocopy_threshold": 0, 00:22:30.324 "tls_version": 0, 00:22:30.324 "enable_ktls": false 00:22:30.324 } 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "method": "sock_impl_set_options", 00:22:30.324 "params": { 00:22:30.324 "impl_name": "posix", 00:22:30.324 "recv_buf_size": 2097152, 00:22:30.324 "send_buf_size": 2097152, 00:22:30.324 "enable_recv_pipe": true, 00:22:30.324 "enable_quickack": false, 00:22:30.324 "enable_placement_id": 0, 00:22:30.324 "enable_zerocopy_send_server": true, 00:22:30.324 "enable_zerocopy_send_client": false, 00:22:30.324 "zerocopy_threshold": 0, 00:22:30.324 "tls_version": 0, 00:22:30.324 "enable_ktls": false 00:22:30.324 } 00:22:30.324 } 00:22:30.324 ] 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "subsystem": "vmd", 00:22:30.324 "config": [] 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "subsystem": "accel", 00:22:30.324 "config": [ 00:22:30.324 { 00:22:30.324 "method": "accel_set_options", 00:22:30.324 "params": { 00:22:30.324 "small_cache_size": 128, 00:22:30.324 "large_cache_size": 16, 00:22:30.324 "task_count": 2048, 00:22:30.324 "sequence_count": 2048, 00:22:30.324 "buf_count": 2048 00:22:30.324 } 00:22:30.324 } 00:22:30.324 ] 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "subsystem": "bdev", 00:22:30.324 "config": [ 00:22:30.324 { 00:22:30.324 "method": "bdev_set_options", 00:22:30.324 "params": { 00:22:30.324 "bdev_io_pool_size": 65535, 00:22:30.324 "bdev_io_cache_size": 256, 00:22:30.324 "bdev_auto_examine": true, 00:22:30.324 "iobuf_small_cache_size": 128, 00:22:30.324 "iobuf_large_cache_size": 16 00:22:30.324 } 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "method": "bdev_raid_set_options", 00:22:30.324 "params": { 00:22:30.324 "process_window_size_kb": 1024, 00:22:30.324 "process_max_bandwidth_mb_sec": 0 00:22:30.324 } 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "method": "bdev_iscsi_set_options", 00:22:30.324 "params": { 00:22:30.324 "timeout_sec": 30 00:22:30.324 } 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "method": "bdev_nvme_set_options", 00:22:30.324 "params": { 00:22:30.324 "action_on_timeout": "none", 00:22:30.324 "timeout_us": 0, 00:22:30.324 "timeout_admin_us": 0, 00:22:30.324 "keep_alive_timeout_ms": 10000, 00:22:30.324 "arbitration_burst": 0, 00:22:30.324 "low_priority_weight": 0, 00:22:30.324 "medium_priority_weight": 0, 00:22:30.324 "high_priority_weight": 0, 00:22:30.324 "nvme_adminq_poll_period_us": 10000, 00:22:30.324 "nvme_ioq_poll_period_us": 0, 00:22:30.324 "io_queue_requests": 512, 00:22:30.324 "delay_cmd_submit": true, 00:22:30.324 "transport_retry_count": 4, 00:22:30.324 "bdev_retry_count": 3, 
00:22:30.324 "transport_ack_timeout": 0, 00:22:30.324 "ctrlr_loss_timeout_sec": 0, 00:22:30.324 "reconnect_delay_sec": 0, 00:22:30.324 "fast_io_fail_timeout_sec": 0, 00:22:30.324 "disable_auto_failback": false, 00:22:30.324 "generate_uuids": false, 00:22:30.324 "transport_tos": 0, 00:22:30.324 "nvme_error_stat": false, 00:22:30.324 "rdma_srq_size": 0, 00:22:30.324 "io_path_stat": false, 00:22:30.324 "allow_accel_sequence": false, 00:22:30.324 "rdma_max_cq_size": 0, 00:22:30.324 "rdma_cm_event_timeout_ms": 0, 00:22:30.324 "dhchap_digests": [ 00:22:30.324 "sha256", 00:22:30.324 "sha384", 00:22:30.324 "sha512" 00:22:30.324 ], 00:22:30.324 "dhchap_dhgroups": [ 00:22:30.324 "null", 00:22:30.324 "ffdhe2048", 00:22:30.324 "ffdhe3072", 00:22:30.324 "ffdhe4096", 00:22:30.324 "ffdhe6144", 00:22:30.324 "ffdhe8192" 00:22:30.324 ] 00:22:30.324 } 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "method": "bdev_nvme_attach_controller", 00:22:30.324 "params": { 00:22:30.324 "name": "nvme0", 00:22:30.324 "trtype": "TCP", 00:22:30.324 "adrfam": "IPv4", 00:22:30.324 "traddr": "10.0.0.2", 00:22:30.324 "trsvcid": "4420", 00:22:30.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.324 "prchk_reftag": false, 00:22:30.324 "prchk_guard": false, 00:22:30.324 "ctrlr_loss_timeout_sec": 0, 00:22:30.324 "reconnect_delay_sec": 0, 00:22:30.324 "fast_io_fail_timeout_sec": 0, 00:22:30.324 "psk": "key0", 00:22:30.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.324 "hdgst": false, 00:22:30.324 "ddgst": false 00:22:30.324 } 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "method": "bdev_nvme_set_hotplug", 00:22:30.324 "params": { 00:22:30.324 "period_us": 100000, 00:22:30.324 "enable": false 00:22:30.324 } 00:22:30.324 }, 00:22:30.324 { 00:22:30.324 "method": "bdev_enable_histogram", 00:22:30.325 "params": { 00:22:30.325 "name": "nvme0n1", 00:22:30.325 "enable": true 00:22:30.325 } 00:22:30.325 }, 00:22:30.325 { 00:22:30.325 "method": "bdev_wait_for_examine" 00:22:30.325 } 00:22:30.325 ] 00:22:30.325 }, 00:22:30.325 { 00:22:30.325 "subsystem": "nbd", 00:22:30.325 "config": [] 00:22:30.325 } 00:22:30.325 ] 00:22:30.325 }' 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 139740 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 139740 ']' 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 139740 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 139740 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 139740' 00:22:30.325 killing process with pid 139740 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 139740 00:22:30.325 Received shutdown signal, test time was about 1.000000 seconds 00:22:30.325 00:22:30.325 Latency(us) 00:22:30.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.325 
=================================================================================================================== 00:22:30.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 139740 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 139422 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 139422 ']' 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 139422 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:30.325 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 139422 00:22:30.587 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:30.587 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:30.587 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 139422' 00:22:30.587 killing process with pid 139422 00:22:30.587 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 139422 00:22:30.587 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 139422 00:22:30.587 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:30.587 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.587 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.587 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:30.587 "subsystems": [ 00:22:30.587 { 00:22:30.587 "subsystem": "keyring", 00:22:30.587 "config": [ 00:22:30.587 { 00:22:30.587 "method": "keyring_file_add_key", 00:22:30.587 "params": { 00:22:30.587 "name": "key0", 00:22:30.587 "path": "/tmp/tmp.pqHjbzYrmH" 00:22:30.587 } 00:22:30.587 } 00:22:30.587 ] 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "subsystem": "iobuf", 00:22:30.587 "config": [ 00:22:30.587 { 00:22:30.587 "method": "iobuf_set_options", 00:22:30.587 "params": { 00:22:30.587 "small_pool_count": 8192, 00:22:30.587 "large_pool_count": 1024, 00:22:30.587 "small_bufsize": 8192, 00:22:30.587 "large_bufsize": 135168 00:22:30.587 } 00:22:30.587 } 00:22:30.587 ] 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "subsystem": "sock", 00:22:30.587 "config": [ 00:22:30.587 { 00:22:30.587 "method": "sock_set_default_impl", 00:22:30.587 "params": { 00:22:30.587 "impl_name": "posix" 00:22:30.587 } 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "method": "sock_impl_set_options", 00:22:30.587 "params": { 00:22:30.587 "impl_name": "ssl", 00:22:30.587 "recv_buf_size": 4096, 00:22:30.587 "send_buf_size": 4096, 00:22:30.587 "enable_recv_pipe": true, 00:22:30.587 "enable_quickack": false, 00:22:30.587 "enable_placement_id": 0, 00:22:30.587 "enable_zerocopy_send_server": true, 00:22:30.587 "enable_zerocopy_send_client": false, 00:22:30.587 "zerocopy_threshold": 0, 00:22:30.587 "tls_version": 0, 00:22:30.587 "enable_ktls": false 00:22:30.587 } 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "method": "sock_impl_set_options", 
00:22:30.587 "params": { 00:22:30.587 "impl_name": "posix", 00:22:30.587 "recv_buf_size": 2097152, 00:22:30.587 "send_buf_size": 2097152, 00:22:30.587 "enable_recv_pipe": true, 00:22:30.587 "enable_quickack": false, 00:22:30.587 "enable_placement_id": 0, 00:22:30.587 "enable_zerocopy_send_server": true, 00:22:30.587 "enable_zerocopy_send_client": false, 00:22:30.587 "zerocopy_threshold": 0, 00:22:30.587 "tls_version": 0, 00:22:30.587 "enable_ktls": false 00:22:30.587 } 00:22:30.587 } 00:22:30.587 ] 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "subsystem": "vmd", 00:22:30.587 "config": [] 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "subsystem": "accel", 00:22:30.587 "config": [ 00:22:30.587 { 00:22:30.587 "method": "accel_set_options", 00:22:30.587 "params": { 00:22:30.587 "small_cache_size": 128, 00:22:30.587 "large_cache_size": 16, 00:22:30.587 "task_count": 2048, 00:22:30.587 "sequence_count": 2048, 00:22:30.587 "buf_count": 2048 00:22:30.587 } 00:22:30.587 } 00:22:30.587 ] 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "subsystem": "bdev", 00:22:30.587 "config": [ 00:22:30.587 { 00:22:30.587 "method": "bdev_set_options", 00:22:30.587 "params": { 00:22:30.587 "bdev_io_pool_size": 65535, 00:22:30.587 "bdev_io_cache_size": 256, 00:22:30.587 "bdev_auto_examine": true, 00:22:30.587 "iobuf_small_cache_size": 128, 00:22:30.587 "iobuf_large_cache_size": 16 00:22:30.587 } 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "method": "bdev_raid_set_options", 00:22:30.587 "params": { 00:22:30.587 "process_window_size_kb": 1024, 00:22:30.587 "process_max_bandwidth_mb_sec": 0 00:22:30.587 } 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "method": "bdev_iscsi_set_options", 00:22:30.587 "params": { 00:22:30.587 "timeout_sec": 30 00:22:30.587 } 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "method": "bdev_nvme_set_options", 00:22:30.587 "params": { 00:22:30.587 "action_on_timeout": "none", 00:22:30.587 "timeout_us": 0, 00:22:30.587 "timeout_admin_us": 0, 00:22:30.587 "keep_alive_timeout_ms": 10000, 00:22:30.587 "arbitration_burst": 0, 00:22:30.587 "low_priority_weight": 0, 00:22:30.587 "medium_priority_weight": 0, 00:22:30.587 "high_priority_weight": 0, 00:22:30.587 "nvme_adminq_poll_period_us": 10000, 00:22:30.587 "nvme_ioq_poll_period_us": 0, 00:22:30.587 "io_queue_requests": 0, 00:22:30.587 "delay_cmd_submit": true, 00:22:30.587 "transport_retry_count": 4, 00:22:30.587 "bdev_retry_count": 3, 00:22:30.587 "transport_ack_timeout": 0, 00:22:30.587 "ctrlr_loss_timeout_sec": 0, 00:22:30.587 "reconnect_delay_sec": 0, 00:22:30.587 "fast_io_fail_timeout_sec": 0, 00:22:30.587 "disable_auto_failback": false, 00:22:30.587 "generate_uuids": false, 00:22:30.587 "transport_tos": 0, 00:22:30.587 "nvme_error_stat": false, 00:22:30.587 "rdma_srq_size": 0, 00:22:30.587 "io_path_stat": false, 00:22:30.587 "allow_accel_sequence": false, 00:22:30.587 "rdma_max_cq_size": 0, 00:22:30.587 "rdma_cm_event_timeout_ms": 0, 00:22:30.587 "dhchap_digests": [ 00:22:30.587 "sha256", 00:22:30.587 "sha384", 00:22:30.587 "sha512" 00:22:30.587 ], 00:22:30.587 "dhchap_dhgroups": [ 00:22:30.587 "null", 00:22:30.587 "ffdhe2048", 00:22:30.587 "ffdhe3072", 00:22:30.587 "ffdhe4096", 00:22:30.587 "ffdhe6144", 00:22:30.587 "ffdhe8192" 00:22:30.587 ] 00:22:30.587 } 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "method": "bdev_nvme_set_hotplug", 00:22:30.587 "params": { 00:22:30.587 "period_us": 100000, 00:22:30.587 "enable": false 00:22:30.587 } 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "method": "bdev_malloc_create", 00:22:30.587 "params": { 00:22:30.587 
"name": "malloc0", 00:22:30.587 "num_blocks": 8192, 00:22:30.587 "block_size": 4096, 00:22:30.587 "physical_block_size": 4096, 00:22:30.587 "uuid": "cd7a4f53-94a3-4662-99eb-a79098a3384a", 00:22:30.587 "optimal_io_boundary": 0, 00:22:30.587 "md_size": 0, 00:22:30.587 "dif_type": 0, 00:22:30.587 "dif_is_head_of_md": false, 00:22:30.587 "dif_pi_format": 0 00:22:30.587 } 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "method": "bdev_wait_for_examine" 00:22:30.587 } 00:22:30.587 ] 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "subsystem": "nbd", 00:22:30.587 "config": [] 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "subsystem": "scheduler", 00:22:30.587 "config": [ 00:22:30.587 { 00:22:30.587 "method": "framework_set_scheduler", 00:22:30.587 "params": { 00:22:30.587 "name": "static" 00:22:30.587 } 00:22:30.587 } 00:22:30.587 ] 00:22:30.587 }, 00:22:30.587 { 00:22:30.587 "subsystem": "nvmf", 00:22:30.587 "config": [ 00:22:30.587 { 00:22:30.587 "method": "nvmf_set_config", 00:22:30.588 "params": { 00:22:30.588 "discovery_filter": "match_any", 00:22:30.588 "admin_cmd_passthru": { 00:22:30.588 "identify_ctrlr": false 00:22:30.588 } 00:22:30.588 } 00:22:30.588 }, 00:22:30.588 { 00:22:30.588 "method": "nvmf_set_max_subsystems", 00:22:30.588 "params": { 00:22:30.588 "max_subsystems": 1024 00:22:30.588 } 00:22:30.588 }, 00:22:30.588 { 00:22:30.588 "method": "nvmf_set_crdt", 00:22:30.588 "params": { 00:22:30.588 "crdt1": 0, 00:22:30.588 "crdt2": 0, 00:22:30.588 "crdt3": 0 00:22:30.588 } 00:22:30.588 }, 00:22:30.588 { 00:22:30.588 "method": "nvmf_create_transport", 00:22:30.588 "params": { 00:22:30.588 "trtype": "TCP", 00:22:30.588 "max_queue_depth": 128, 00:22:30.588 "max_io_qpairs_per_ctrlr": 127, 00:22:30.588 "in_capsule_data_size": 4096, 00:22:30.588 "max_io_size": 131072, 00:22:30.588 "io_unit_size": 131072, 00:22:30.588 "max_aq_depth": 128, 00:22:30.588 "num_shared_buffers": 511, 00:22:30.588 "buf_cache_size": 4294967295, 00:22:30.588 "dif_insert_or_strip": false, 00:22:30.588 "zcopy": false, 00:22:30.588 "c2h_success": false, 00:22:30.588 "sock_priority": 0, 00:22:30.588 "abort_timeout_sec": 1, 00:22:30.588 " 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.588 ack_timeout": 0, 00:22:30.588 "data_wr_pool_size": 0 00:22:30.588 } 00:22:30.588 }, 00:22:30.588 { 00:22:30.588 "method": "nvmf_create_subsystem", 00:22:30.588 "params": { 00:22:30.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.588 "allow_any_host": false, 00:22:30.588 "serial_number": "00000000000000000000", 00:22:30.588 "model_number": "SPDK bdev Controller", 00:22:30.588 "max_namespaces": 32, 00:22:30.588 "min_cntlid": 1, 00:22:30.588 "max_cntlid": 65519, 00:22:30.588 "ana_reporting": false 00:22:30.588 } 00:22:30.588 }, 00:22:30.588 { 00:22:30.588 "method": "nvmf_subsystem_add_host", 00:22:30.588 "params": { 00:22:30.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.588 "host": "nqn.2016-06.io.spdk:host1", 00:22:30.588 "psk": "key0" 00:22:30.588 } 00:22:30.588 }, 00:22:30.588 { 00:22:30.588 "method": "nvmf_subsystem_add_ns", 00:22:30.588 "params": { 00:22:30.588 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.588 "namespace": { 00:22:30.588 "nsid": 1, 00:22:30.588 "bdev_name": "malloc0", 00:22:30.588 "nguid": "CD7A4F5394A3466299EBA79098A3384A", 00:22:30.588 "uuid": "cd7a4f53-94a3-4662-99eb-a79098a3384a", 00:22:30.588 "no_auto_visible": false 00:22:30.588 } 00:22:30.588 } 00:22:30.588 }, 00:22:30.588 { 00:22:30.588 "method": "nvmf_subsystem_add_listener", 00:22:30.588 "params": { 00:22:30.588 
"nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.588 "listen_address": { 00:22:30.588 "trtype": "TCP", 00:22:30.588 "adrfam": "IPv4", 00:22:30.588 "traddr": "10.0.0.2", 00:22:30.588 "trsvcid": "4420" 00:22:30.588 }, 00:22:30.588 "secure_channel": false, 00:22:30.588 "sock_impl": "ssl" 00:22:30.588 } 00:22:30.588 } 00:22:30.588 ] 00:22:30.588 } 00:22:30.588 ] 00:22:30.588 }' 00:22:30.588 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=140383 00:22:30.588 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 140383 00:22:30.588 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:30.588 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 140383 ']' 00:22:30.588 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.588 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.588 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.588 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.588 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.588 [2024-07-25 07:28:37.928163] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:22:30.588 [2024-07-25 07:28:37.928247] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.849 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.849 [2024-07-25 07:28:37.993252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.849 [2024-07-25 07:28:38.057450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.849 [2024-07-25 07:28:38.057485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.849 [2024-07-25 07:28:38.057492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.849 [2024-07-25 07:28:38.057499] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.849 [2024-07-25 07:28:38.057505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:30.849 [2024-07-25 07:28:38.057554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.110 [2024-07-25 07:28:38.255159] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.110 [2024-07-25 07:28:38.294547] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:31.110 [2024-07-25 07:28:38.294774] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=140454 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 140454 /var/tmp/bdevperf.sock 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 140454 ']' 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
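The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock..." step is waitforlisten doing its job: it retries an RPC against the freshly created socket until the application answers. A minimal sketch of that wait, assuming the stock scripts/rpc.py from the tree (rpc_get_methods is an ordinary RPC that any running SPDK app responds to):

  # Poll the app's RPC socket until it answers, then continue with the test.
  for _ in $(seq 1 100); do
      if ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done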
00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.371 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:31.371 "subsystems": [ 00:22:31.371 { 00:22:31.371 "subsystem": "keyring", 00:22:31.371 "config": [ 00:22:31.371 { 00:22:31.371 "method": "keyring_file_add_key", 00:22:31.371 "params": { 00:22:31.371 "name": "key0", 00:22:31.371 "path": "/tmp/tmp.pqHjbzYrmH" 00:22:31.371 } 00:22:31.371 } 00:22:31.371 ] 00:22:31.371 }, 00:22:31.371 { 00:22:31.371 "subsystem": "iobuf", 00:22:31.371 "config": [ 00:22:31.371 { 00:22:31.371 "method": "iobuf_set_options", 00:22:31.371 "params": { 00:22:31.371 "small_pool_count": 8192, 00:22:31.371 "large_pool_count": 1024, 00:22:31.371 "small_bufsize": 8192, 00:22:31.371 "large_bufsize": 135168 00:22:31.371 } 00:22:31.371 } 00:22:31.371 ] 00:22:31.371 }, 00:22:31.371 { 00:22:31.371 "subsystem": "sock", 00:22:31.371 "config": [ 00:22:31.371 { 00:22:31.371 "method": "sock_set_default_impl", 00:22:31.371 "params": { 00:22:31.371 "impl_name": "posix" 00:22:31.371 } 00:22:31.371 }, 00:22:31.371 { 00:22:31.371 "method": "sock_impl_set_options", 00:22:31.371 "params": { 00:22:31.371 "impl_name": "ssl", 00:22:31.371 "recv_buf_size": 4096, 00:22:31.371 "send_buf_size": 4096, 00:22:31.371 "enable_recv_pipe": true, 00:22:31.371 "enable_quickack": false, 00:22:31.371 "enable_placement_id": 0, 00:22:31.371 "enable_zerocopy_send_server": true, 00:22:31.371 "enable_zerocopy_send_client": false, 00:22:31.371 "zerocopy_threshold": 0, 00:22:31.371 "tls_version": 0, 00:22:31.371 "enable_ktls": false 00:22:31.371 } 00:22:31.371 }, 00:22:31.371 { 00:22:31.371 "method": "sock_impl_set_options", 00:22:31.371 "params": { 00:22:31.371 "impl_name": "posix", 00:22:31.371 "recv_buf_size": 2097152, 00:22:31.371 "send_buf_size": 2097152, 00:22:31.371 "enable_recv_pipe": true, 00:22:31.371 "enable_quickack": false, 00:22:31.371 "enable_placement_id": 0, 00:22:31.371 "enable_zerocopy_send_server": true, 00:22:31.371 "enable_zerocopy_send_client": false, 00:22:31.371 "zerocopy_threshold": 0, 00:22:31.371 "tls_version": 0, 00:22:31.371 "enable_ktls": false 00:22:31.371 } 00:22:31.371 } 00:22:31.371 ] 00:22:31.371 }, 00:22:31.371 { 00:22:31.371 "subsystem": "vmd", 00:22:31.371 "config": [] 00:22:31.371 }, 00:22:31.371 { 00:22:31.371 "subsystem": "accel", 00:22:31.371 "config": [ 00:22:31.371 { 00:22:31.371 "method": "accel_set_options", 00:22:31.371 "params": { 00:22:31.371 "small_cache_size": 128, 00:22:31.371 "large_cache_size": 16, 00:22:31.371 "task_count": 2048, 00:22:31.371 "sequence_count": 2048, 00:22:31.371 "buf_count": 2048 00:22:31.371 } 00:22:31.371 } 00:22:31.371 ] 00:22:31.371 }, 00:22:31.371 { 00:22:31.371 "subsystem": "bdev", 00:22:31.371 "config": [ 00:22:31.371 { 00:22:31.371 "method": "bdev_set_options", 00:22:31.371 "params": { 00:22:31.371 "bdev_io_pool_size": 65535, 00:22:31.371 "bdev_io_cache_size": 256, 00:22:31.371 "bdev_auto_examine": true, 00:22:31.371 "iobuf_small_cache_size": 128, 00:22:31.371 "iobuf_large_cache_size": 16 00:22:31.371 } 00:22:31.371 }, 00:22:31.371 { 00:22:31.371 "method": "bdev_raid_set_options", 00:22:31.371 
"params": { 00:22:31.371 "process_window_size_kb": 1024, 00:22:31.371 "process_max_bandwidth_mb_sec": 0 00:22:31.371 } 00:22:31.371 }, 00:22:31.371 { 00:22:31.371 "method": "bdev_iscsi_set_options", 00:22:31.371 "params": { 00:22:31.371 "timeout_sec": 30 00:22:31.371 } 00:22:31.372 }, 00:22:31.372 { 00:22:31.372 "method": "bdev_nvme_set_options", 00:22:31.372 "params": { 00:22:31.372 "action_on_timeout": "none", 00:22:31.372 "timeout_us": 0, 00:22:31.372 "timeout_admin_us": 0, 00:22:31.372 "keep_alive_timeout_ms": 10000, 00:22:31.372 "arbitration_burst": 0, 00:22:31.372 "low_priority_weight": 0, 00:22:31.372 "medium_priority_weight": 0, 00:22:31.372 "high_priority_weight": 0, 00:22:31.372 "nvme_adminq_poll_period_us": 10000, 00:22:31.372 "nvme_ioq_poll_period_us": 0, 00:22:31.372 "io_queue_requests": 512, 00:22:31.372 "delay_cmd_submit": true, 00:22:31.372 "transport_retry_count": 4, 00:22:31.372 "bdev_retry_count": 3, 00:22:31.372 "transport_ack_timeout": 0, 00:22:31.372 "ctrlr_loss_timeout_sec": 0, 00:22:31.372 "reconnect_delay_sec": 0, 00:22:31.372 "fast_io_fail_timeout_sec": 0, 00:22:31.372 "disable_auto_failback": false, 00:22:31.372 "generate_uuids": false, 00:22:31.372 "transport_tos": 0, 00:22:31.372 "nvme_error_stat": false, 00:22:31.372 "rdma_srq_size": 0, 00:22:31.372 "io_path_stat": false, 00:22:31.372 "allow_accel_sequence": false, 00:22:31.372 "rdma_max_cq_size": 0, 00:22:31.372 "rdma_cm_event_timeout_ms": 0, 00:22:31.372 "dhchap_digests": [ 00:22:31.372 "sha256", 00:22:31.372 "sha384", 00:22:31.372 "sha512" 00:22:31.372 ], 00:22:31.372 "dhchap_dhgroups": [ 00:22:31.372 "null", 00:22:31.372 "ffdhe2048", 00:22:31.372 "ffdhe3072", 00:22:31.372 "ffdhe4096", 00:22:31.372 "ffdhe6144", 00:22:31.372 "ffdhe8192" 00:22:31.372 ] 00:22:31.372 } 00:22:31.372 }, 00:22:31.372 { 00:22:31.372 "method": "bdev_nvme_attach_controller", 00:22:31.372 "params": { 00:22:31.372 "name": "nvme0", 00:22:31.372 "trtype": "TCP", 00:22:31.372 "adrfam": "IPv4", 00:22:31.372 "traddr": "10.0.0.2", 00:22:31.372 "trsvcid": "4420", 00:22:31.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.372 "prchk_reftag": false, 00:22:31.372 "prchk_guard": false, 00:22:31.372 "ctrlr_loss_timeout_sec": 0, 00:22:31.372 "reconnect_delay_sec": 0, 00:22:31.372 "fast_io_fail_timeout_sec": 0, 00:22:31.372 "psk": "key0", 00:22:31.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.372 "hdgst": false, 00:22:31.372 "ddgst": false 00:22:31.372 } 00:22:31.372 }, 00:22:31.372 { 00:22:31.372 "method": "bdev_nvme_set_hotplug", 00:22:31.372 "params": { 00:22:31.372 "period_us": 100000, 00:22:31.372 "enable": false 00:22:31.372 } 00:22:31.372 }, 00:22:31.372 { 00:22:31.372 "method": "bdev_enable_histogram", 00:22:31.372 "params": { 00:22:31.372 "name": "nvme0n1", 00:22:31.372 "enable": true 00:22:31.372 } 00:22:31.372 }, 00:22:31.372 { 00:22:31.372 "method": "bdev_wait_for_examine" 00:22:31.372 } 00:22:31.372 ] 00:22:31.372 }, 00:22:31.372 { 00:22:31.372 "subsystem": "nbd", 00:22:31.372 "config": [] 00:22:31.372 } 00:22:31.372 ] 00:22:31.372 }' 00:22:31.633 [2024-07-25 07:28:38.781672] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:22:31.633 [2024-07-25 07:28:38.781725] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140454 ] 00:22:31.633 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.633 [2024-07-25 07:28:38.853877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.633 [2024-07-25 07:28:38.907572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.894 [2024-07-25 07:28:39.041112] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.467 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.467 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:32.467 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:32.467 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:32.467 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.467 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:32.467 Running I/O for 1 seconds... 00:22:33.852 00:22:33.852 Latency(us) 00:22:33.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.852 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:33.852 Verification LBA range: start 0x0 length 0x2000 00:22:33.852 nvme0n1 : 1.06 1702.65 6.65 0.00 0.00 73275.48 4942.51 138062.51 00:22:33.852 =================================================================================================================== 00:22:33.852 Total : 1702.65 6.65 0.00 0.00 73275.48 4942.51 138062.51 00:22:33.852 0 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:33.852 nvmf_trace.0 00:22:33.852 07:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 140454 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 140454 ']' 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 140454 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.852 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 140454 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 140454' 00:22:33.852 killing process with pid 140454 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 140454 00:22:33.852 Received shutdown signal, test time was about 1.000000 seconds 00:22:33.852 00:22:33.852 Latency(us) 00:22:33.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.852 =================================================================================================================== 00:22:33.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 140454 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.852 rmmod nvme_tcp 00:22:33.852 rmmod nvme_fabrics 00:22:33.852 rmmod nvme_keyring 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 140383 ']' 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 140383 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 140383 ']' 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 140383 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:33.852 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.852 07:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 140383 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 140383' 00:22:34.113 killing process with pid 140383 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 140383 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 140383 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.113 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1H5caf9KtC /tmp/tmp.h80ONmVVSM /tmp/tmp.pqHjbzYrmH 00:22:36.660 00:22:36.660 real 1m22.801s 00:22:36.660 user 2m4.041s 00:22:36.660 sys 0m29.702s 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.660 ************************************ 00:22:36.660 END TEST nvmf_tls 00:22:36.660 ************************************ 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:36.660 ************************************ 00:22:36.660 START TEST nvmf_fips 00:22:36.660 ************************************ 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:36.660 * Looking for test storage... 
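The tail of the TLS run above is the standard teardown: the kernel initiator modules are unloaded, the namespaced interface address is flushed, and the temporary PSK files are deleted. A condensed sketch of that sequence, using only the names that appear in this log (requires root, as in the job):

  sync
  modprobe -v -r nvme-tcp      # the log shows nvme_fabrics and nvme_keyring coming out with it
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1
  rm -f /tmp/tmp.1H5caf9KtC /tmp/tmp.h80ONmVVSM /tmp/tmp.pqHjbzYrmH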
00:22:36.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:36.660 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:36.661 Error setting digest 00:22:36.661 00C249141C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:36.661 00C249141C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.661 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:44.844 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 
00:22:44.844 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:44.844 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:44.844 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.844 
07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.844 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.845 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.845 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.845 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.845 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.845 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.845 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:22:44.845 00:22:44.845 --- 10.0.0.2 ping statistics --- 00:22:44.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.845 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:22:44.845 00:22:44.845 --- 10.0.0.1 ping statistics --- 00:22:44.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.845 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=145237 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 145237 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 145237 ']' 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:44.845 [2024-07-25 07:28:51.231378] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
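The nvmf_tcp_init trace above boils down to a small two-interface topology: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the host namespace as 10.0.0.1, an iptables rule opens TCP/4420, and two pings confirm reachability in both directions. Condensed from the commands in the trace (the interface names and 10.0.0.0/24 addresses are simply what this rig enumerated, not fixed values):

# Condensed from nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port goes into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                     # host -> namespace check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # namespace -> host check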
00:22:44.845 [2024-07-25 07:28:51.231429] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.845 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.845 [2024-07-25 07:28:51.313921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.845 [2024-07-25 07:28:51.389389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.845 [2024-07-25 07:28:51.389443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.845 [2024-07-25 07:28:51.389450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.845 [2024-07-25 07:28:51.389458] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.845 [2024-07-25 07:28:51.389464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.845 [2024-07-25 07:28:51.389495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.845 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:44.845 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.845 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:44.845 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:44.845 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:44.845 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:44.845 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:44.845 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:44.845 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:44.845 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:44.845 [2024-07-25 07:28:52.194337] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.845 [2024-07-25 07:28:52.210330] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.845 [2024-07-25 07:28:52.210653] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.106 
[2024-07-25 07:28:52.240607] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:45.106 malloc0 00:22:45.106 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.106 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=145592 00:22:45.106 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 145592 /var/tmp/bdevperf.sock 00:22:45.106 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.106 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 145592 ']' 00:22:45.106 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.106 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.106 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.106 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.106 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:45.106 [2024-07-25 07:28:52.339645] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:22:45.106 [2024-07-25 07:28:52.339720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145592 ] 00:22:45.106 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.106 [2024-07-25 07:28:52.397009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.106 [2024-07-25 07:28:52.460095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.051 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.051 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:46.051 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:46.051 [2024-07-25 07:28:53.235613] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.051 [2024-07-25 07:28:53.235675] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:46.051 TLSTESTn1 00:22:46.051 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.312 Running I/O for 10 seconds... 
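The initiator side of the run that follows is a small, repeatable pattern: write the NVMe TLS PSK interchange key to a 0600 file, start bdevperf in wait-for-RPC mode (-z) on its own RPC socket, attach a controller over TCP with --psk pointing at that file, then drive the 10-second verify workload through bdevperf.py. Condensed from the fips.sh trace above (SPDK build paths shortened; $key stands for the NVMeTLSkey-1:01:... interchange key shown on the key= line earlier):

# Condensed from fips.sh above; paths shortened, $key is the interchange key from this trace.
echo -n "$key" > key.txt && chmod 0600 key.txt
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The deprecation warnings in the surrounding log (nvmf_tcp_psk_path and spdk_nvme_ctrlr_opts.psk) are this path-based PSK interface being flagged for removal in v24.09.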
00:22:56.313 00:22:56.314 Latency(us) 00:22:56.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.314 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:56.314 Verification LBA range: start 0x0 length 0x2000 00:22:56.314 TLSTESTn1 : 10.07 2071.13 8.09 0.00 0.00 61591.09 5051.73 138062.51 00:22:56.314 =================================================================================================================== 00:22:56.314 Total : 2071.13 8.09 0.00 0.00 61591.09 5051.73 138062.51 00:22:56.314 0 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:56.314 nvmf_trace.0 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 145592 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 145592 ']' 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 145592 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:56.314 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 145592 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 145592' 00:22:56.574 killing process with pid 145592 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 145592 00:22:56.574 Received shutdown signal, test time was about 10.000000 seconds 00:22:56.574 00:22:56.574 Latency(us) 00:22:56.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.574 =================================================================================================================== 00:22:56.574 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.574 [2024-07-25 
07:29:03.720837] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 145592 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.574 rmmod nvme_tcp 00:22:56.574 rmmod nvme_fabrics 00:22:56.574 rmmod nvme_keyring 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 145237 ']' 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 145237 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 145237 ']' 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 145237 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:56.574 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 145237 00:22:56.835 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:56.835 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:56.835 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 145237' 00:22:56.835 killing process with pid 145237 00:22:56.835 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 145237 00:22:56.835 [2024-07-25 07:29:03.976274] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:56.835 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 145237 00:22:56.835 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:56.835 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:56.835 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:56.835 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.835 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.835 07:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.835 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.835 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.381 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:59.381 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:59.381 00:22:59.381 real 0m22.613s 00:22:59.381 user 0m22.923s 00:22:59.381 sys 0m10.383s 00:22:59.381 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:59.381 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:59.381 ************************************ 00:22:59.381 END TEST nvmf_fips 00:22:59.381 ************************************ 00:22:59.381 07:29:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:22:59.381 07:29:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:22:59.381 07:29:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:22:59.381 07:29:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:22:59.381 07:29:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.381 07:29:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.972 
07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:05.972 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:05.972 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
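The enumeration pattern repeating here (and earlier in the fips nvmftestinit) builds the e810/x722/mlx device-ID lists, matches them against the PCI bus, and then resolves each matching function to its kernel net device through sysfs, which is where the "Found net devices under 0000:4b:00.x: cvl_0_x" lines come from. A hedged sketch of that sysfs step (the PCI address is the one this rig reported, and the real helper also checks driver binding and link state before accepting a device):

# Hedged sketch of the sysfs lookup behind the "Found net devices under ..." lines.
pci=0000:4b:00.0
for net in /sys/bus/pci/devices/"$pci"/net/*; do
    echo "Found net devices under $pci: $(basename "$net")"
done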
00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:05.972 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:05.972 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:05.972 07:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:05.972 ************************************ 00:23:05.972 START TEST nvmf_perf_adq 00:23:05.972 ************************************ 00:23:05.972 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:05.972 * Looking for test storage... 
00:23:05.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:05.972 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.972 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:05.972 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.972 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.972 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.973 07:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.973 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.114 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:14.115 07:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:14.115 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:14.115 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:14.115 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:14.115 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:23:14.115 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:14.115 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:16.029 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
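Before perf_adq re-runs nvmftestinit (the enumeration continuing below), adq_reload_driver bounces the ice driver so ADQ-related queue state starts from a clean slate, and the five-second sleep gives the E810 ports time to come back and re-register their net devices. The reload seen in the trace is just:

# From adq_reload_driver in the perf_adq.sh trace above.
rmmod ice
modprobe ice
sleep 5   # let the E810 ports re-register their net devices before re-enumerating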
00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:21.428 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:21.429 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:21.429 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:21.429 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:21.429 07:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:21.429 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
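Note: the nvmf_tcp_init sequence traced above builds the whole test topology on this one host: one E810 port (cvl_0_0) is moved into a network namespace to act as the NVMe/TCP target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator. A condensed recap is sketched below; the interface names and 10.0.0.x addresses are the values observed in this run, not fixed defaults, and the commands need root.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables rule and the two pings that follow simply open TCP port 4420 on the initiator interface and confirm that 10.0.0.1 and 10.0.0.2 reach each other across the two physical ports before the target is started.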
00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:21.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:23:21.429 00:23:21.429 --- 10.0.0.2 ping statistics --- 00:23:21.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.429 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.424 ms 00:23:21.429 00:23:21.429 --- 10.0.0.1 ping statistics --- 00:23:21.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.429 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:21.429 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:21.430 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=157317 00:23:21.430 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 157317 00:23:21.430 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:21.430 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 157317 ']' 00:23:21.430 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.430 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:21.430 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:21.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.430 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:21.430 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:21.430 [2024-07-25 07:29:28.765512] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:23:21.430 [2024-07-25 07:29:28.765581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.692 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.692 [2024-07-25 07:29:28.838301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:21.692 [2024-07-25 07:29:28.915291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.692 [2024-07-25 07:29:28.915329] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.692 [2024-07-25 07:29:28.915337] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.692 [2024-07-25 07:29:28.915343] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.692 [2024-07-25 07:29:28.915349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.692 [2024-07-25 07:29:28.915488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.692 [2024-07-25 07:29:28.915604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.692 [2024-07-25 07:29:28.915761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.692 [2024-07-25 07:29:28.915763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.263 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
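Note: at this point the target has been launched inside the namespace with "ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc" (pid 157317), and adq_configure_nvmf_target 0 starts driving it over JSON-RPC through the test's rpc_cmd wrapper. A minimal equivalent using scripts/rpc.py from an SPDK tree is sketched below; the RPC names and arguments are copied from the rpc_cmd calls traced around this point, while the relative paths, the backgrounding, and talking to the default /var/tmp/spdk.sock socket are assumptions of the sketch, not part of the log.

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  # (the test waits for the RPC socket via waitforlisten before issuing any RPCs)
  ./scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With placement id 0 and sock priority 0 this is the baseline (non-ADQ) configuration; spdk_nvme_perf is then run against 10.0.0.2:4420, and the nvmf_get_stats output further down shows the I/O qpairs spread evenly across all four poll groups.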
00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.524 [2024-07-25 07:29:29.738554] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.524 Malloc1 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.524 [2024-07-25 07:29:29.797888] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=157503 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:23:22.524 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:22.524 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.070 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:25.070 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.070 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:25.070 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.070 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:25.070 "tick_rate": 2400000000, 00:23:25.070 "poll_groups": [ 00:23:25.070 { 00:23:25.070 "name": "nvmf_tgt_poll_group_000", 00:23:25.070 "admin_qpairs": 1, 00:23:25.070 "io_qpairs": 1, 00:23:25.070 "current_admin_qpairs": 1, 00:23:25.070 "current_io_qpairs": 1, 00:23:25.070 "pending_bdev_io": 0, 00:23:25.070 "completed_nvme_io": 20221, 00:23:25.070 "transports": [ 00:23:25.070 { 00:23:25.070 "trtype": "TCP" 00:23:25.070 } 00:23:25.070 ] 00:23:25.070 }, 00:23:25.070 { 00:23:25.070 "name": "nvmf_tgt_poll_group_001", 00:23:25.070 "admin_qpairs": 0, 00:23:25.070 "io_qpairs": 1, 00:23:25.070 "current_admin_qpairs": 0, 00:23:25.070 "current_io_qpairs": 1, 00:23:25.070 "pending_bdev_io": 0, 00:23:25.070 "completed_nvme_io": 28362, 00:23:25.070 "transports": [ 00:23:25.070 { 00:23:25.070 "trtype": "TCP" 00:23:25.070 } 00:23:25.070 ] 00:23:25.070 }, 00:23:25.070 { 00:23:25.070 "name": "nvmf_tgt_poll_group_002", 00:23:25.070 "admin_qpairs": 0, 00:23:25.070 "io_qpairs": 1, 00:23:25.070 "current_admin_qpairs": 0, 00:23:25.070 "current_io_qpairs": 1, 00:23:25.070 "pending_bdev_io": 0, 00:23:25.070 "completed_nvme_io": 18197, 00:23:25.070 "transports": [ 00:23:25.070 { 00:23:25.070 "trtype": "TCP" 00:23:25.070 } 00:23:25.070 ] 00:23:25.070 }, 00:23:25.070 { 00:23:25.070 "name": "nvmf_tgt_poll_group_003", 00:23:25.070 "admin_qpairs": 0, 00:23:25.070 "io_qpairs": 1, 00:23:25.070 "current_admin_qpairs": 0, 00:23:25.070 "current_io_qpairs": 1, 00:23:25.070 "pending_bdev_io": 0, 00:23:25.070 "completed_nvme_io": 20819, 00:23:25.070 "transports": [ 00:23:25.070 { 00:23:25.070 "trtype": "TCP" 00:23:25.070 } 00:23:25.070 ] 00:23:25.070 } 00:23:25.070 ] 00:23:25.070 }' 00:23:25.070 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:25.070 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:25.070 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:25.070 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:25.070 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 157503 00:23:33.208 Initializing NVMe Controllers 00:23:33.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:33.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:33.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:33.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:33.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:33.209 Initialization complete. Launching workers. 00:23:33.209 ======================================================== 00:23:33.209 Latency(us) 00:23:33.209 Device Information : IOPS MiB/s Average min max 00:23:33.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13507.89 52.77 4738.19 1794.72 9445.51 00:23:33.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14476.49 56.55 4420.80 1564.32 8804.75 00:23:33.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13016.19 50.84 4917.21 1342.26 18875.86 00:23:33.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11867.79 46.36 5392.84 1071.85 12586.03 00:23:33.209 ======================================================== 00:23:33.209 Total : 52868.37 206.52 4842.31 1071.85 18875.86 00:23:33.209 00:23:33.209 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:33.209 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.209 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:33.209 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.209 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:33.209 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.209 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.209 rmmod nvme_tcp 00:23:33.209 rmmod nvme_fabrics 00:23:33.209 rmmod nvme_keyring 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 157317 ']' 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 157317 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 157317 ']' 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 157317 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 157317 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.209 07:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 157317' 00:23:33.209 killing process with pid 157317 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 157317 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 157317 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.209 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.124 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:35.124 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:35.124 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:37.037 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:38.950 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:44.250 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:44.250 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:44.250 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:44.251 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:44.251 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.251 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:44.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:23:44.251 00:23:44.251 --- 10.0.0.2 ping statistics --- 00:23:44.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.251 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:23:44.251 00:23:44.251 --- 10.0.0.1 ping statistics --- 00:23:44.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.251 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:44.251 net.core.busy_poll = 1 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:44.251 net.core.busy_read = 1 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:44.251 
07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=162205 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 162205 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 162205 ']' 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:44.251 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:44.566 [2024-07-25 07:29:51.651310] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:23:44.566 [2024-07-25 07:29:51.651381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.566 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.566 [2024-07-25 07:29:51.723983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.566 [2024-07-25 07:29:51.799199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.567 [2024-07-25 07:29:51.799243] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.567 [2024-07-25 07:29:51.799250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.567 [2024-07-25 07:29:51.799256] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.567 [2024-07-25 07:29:51.799262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
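Note: before this second target start, adq_reload_driver reloaded the ice driver and adq_configure_driver (traced just above) switched the NIC and kernel into the ADQ-style setup: hardware TC offload on, busy polling enabled, an mqprio root qdisc with two traffic classes, and a flower filter that steers NVMe/TCP traffic to 10.0.0.2:4420 into hw_tc 1. The commands below are copied from the trace; cvl_0_0, the namespace name, and the jenkins workspace path are specific to this run.

  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
  ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The target started here is then configured with --enable-placement-id 1 and --sock-priority 1 (adq_configure_nvmf_target 1 below), so its poll groups can line up with the hardware queues selected by that filter; the second nvmf_get_stats output reflects this, with the I/O qpairs concentrated on two poll groups instead of spread across all four.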
00:23:44.567 [2024-07-25 07:29:51.799332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.567 [2024-07-25 07:29:51.799434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.567 [2024-07-25 07:29:51.799621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.567 [2024-07-25 07:29:51.799623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.142 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.402 [2024-07-25 07:29:52.601678] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.402 Malloc1 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.402 [2024-07-25 07:29:52.661057] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=162335 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:45.402 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:45.402 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.313 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:47.313 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.313 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.573 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.573 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:47.573 "tick_rate": 2400000000, 00:23:47.573 "poll_groups": [ 00:23:47.573 { 00:23:47.573 "name": "nvmf_tgt_poll_group_000", 00:23:47.573 "admin_qpairs": 1, 00:23:47.573 "io_qpairs": 1, 00:23:47.573 "current_admin_qpairs": 1, 00:23:47.573 
"current_io_qpairs": 1, 00:23:47.573 "pending_bdev_io": 0, 00:23:47.573 "completed_nvme_io": 25062, 00:23:47.573 "transports": [ 00:23:47.573 { 00:23:47.573 "trtype": "TCP" 00:23:47.573 } 00:23:47.573 ] 00:23:47.573 }, 00:23:47.573 { 00:23:47.573 "name": "nvmf_tgt_poll_group_001", 00:23:47.573 "admin_qpairs": 0, 00:23:47.573 "io_qpairs": 3, 00:23:47.573 "current_admin_qpairs": 0, 00:23:47.573 "current_io_qpairs": 3, 00:23:47.573 "pending_bdev_io": 0, 00:23:47.573 "completed_nvme_io": 42677, 00:23:47.573 "transports": [ 00:23:47.573 { 00:23:47.573 "trtype": "TCP" 00:23:47.573 } 00:23:47.573 ] 00:23:47.573 }, 00:23:47.573 { 00:23:47.573 "name": "nvmf_tgt_poll_group_002", 00:23:47.573 "admin_qpairs": 0, 00:23:47.573 "io_qpairs": 0, 00:23:47.573 "current_admin_qpairs": 0, 00:23:47.573 "current_io_qpairs": 0, 00:23:47.573 "pending_bdev_io": 0, 00:23:47.573 "completed_nvme_io": 0, 00:23:47.573 "transports": [ 00:23:47.573 { 00:23:47.573 "trtype": "TCP" 00:23:47.573 } 00:23:47.573 ] 00:23:47.573 }, 00:23:47.573 { 00:23:47.573 "name": "nvmf_tgt_poll_group_003", 00:23:47.573 "admin_qpairs": 0, 00:23:47.573 "io_qpairs": 0, 00:23:47.573 "current_admin_qpairs": 0, 00:23:47.573 "current_io_qpairs": 0, 00:23:47.573 "pending_bdev_io": 0, 00:23:47.573 "completed_nvme_io": 0, 00:23:47.574 "transports": [ 00:23:47.574 { 00:23:47.574 "trtype": "TCP" 00:23:47.574 } 00:23:47.574 ] 00:23:47.574 } 00:23:47.574 ] 00:23:47.574 }' 00:23:47.574 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:47.574 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:47.574 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:47.574 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:47.574 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 162335 00:23:55.712 Initializing NVMe Controllers 00:23:55.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:55.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:55.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:55.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:55.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:55.712 Initialization complete. Launching workers. 
00:23:55.712 ======================================================== 00:23:55.712 Latency(us) 00:23:55.712 Device Information : IOPS MiB/s Average min max 00:23:55.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 16277.50 63.58 3931.71 1246.74 7398.80 00:23:55.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7926.80 30.96 8074.98 1217.00 54021.88 00:23:55.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7912.60 30.91 8087.54 1620.82 53236.20 00:23:55.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6519.30 25.47 9838.34 1495.20 54530.22 00:23:55.712 ======================================================== 00:23:55.712 Total : 38636.20 150.92 6629.53 1217.00 54530.22 00:23:55.712 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:55.712 rmmod nvme_tcp 00:23:55.712 rmmod nvme_fabrics 00:23:55.712 rmmod nvme_keyring 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 162205 ']' 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 162205 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 162205 ']' 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 162205 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 162205 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 162205' 00:23:55.712 killing process with pid 162205 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 162205 00:23:55.712 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 162205 00:23:55.973 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:55.973 07:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:55.973 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:55.973 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:55.973 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:55.973 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.973 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.973 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.273 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:59.273 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:59.273 00:23:59.273 real 0m53.185s 00:23:59.273 user 2m44.303s 00:23:59.273 sys 0m12.901s 00:23:59.273 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:59.273 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:59.273 ************************************ 00:23:59.273 END TEST nvmf_perf_adq 00:23:59.273 ************************************ 00:23:59.273 07:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:59.273 07:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:59.273 07:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:59.273 07:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:59.273 ************************************ 00:23:59.273 START TEST nvmf_shutdown 00:23:59.273 ************************************ 00:23:59.273 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:59.273 * Looking for test storage... 
00:23:59.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.274 07:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:59.274 07:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:59.274 ************************************ 00:23:59.274 START TEST nvmf_shutdown_tc1 00:23:59.274 ************************************ 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.274 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:05.862 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:05.862 07:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:05.862 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:05.862 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:05.862 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.862 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.124 07:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.124 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.124 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:06.124 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.124 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.124 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.124 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:06.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:24:06.124 00:24:06.124 --- 10.0.0.2 ping statistics --- 00:24:06.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.124 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:24:06.124 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.476 ms 00:24:06.384 00:24:06.384 --- 10.0.0.1 ping statistics --- 00:24:06.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.384 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=169354 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 169354 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 169354 ']' 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:06.384 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:06.384 [2024-07-25 07:30:13.577011] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:24:06.384 [2024-07-25 07:30:13.577075] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.384 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.384 [2024-07-25 07:30:13.662256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.384 [2024-07-25 07:30:13.750248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.384 [2024-07-25 07:30:13.750307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.384 [2024-07-25 07:30:13.750315] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.384 [2024-07-25 07:30:13.750322] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.384 [2024-07-25 07:30:13.750328] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
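At this point the target network namespace is in place (cvl_0_0 moved into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 left on the host with 10.0.0.1/24, both verified by the ping checks above), and nvmf_tgt is launched inside it with core mask 0x1E, i.e. binary 11110, so its reactors run on cores 1-4 while core 0 stays free for the bdev_svc/bdevperf side started later with -m 0x1. A rough sketch of this launch-and-wait step; the polling loop stands in for the waitforlisten helper and is illustrative, not the exact common.sh code:

    # Start the SPDK target inside the namespace and wait until its RPC socket answers.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # waitforlisten (sketched): poll the RPC socket until the app responds or a timeout hits.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done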
00:24:06.384 [2024-07-25 07:30:13.750462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.384 [2024-07-25 07:30:13.750631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.384 [2024-07-25 07:30:13.750799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.384 [2024-07-25 07:30:13.750800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.326 [2024-07-25 07:30:14.418328] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.326 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.326 Malloc1 00:24:07.326 [2024-07-25 07:30:14.521834] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.326 Malloc2 00:24:07.326 Malloc3 00:24:07.326 Malloc4 00:24:07.326 Malloc5 00:24:07.327 Malloc6 00:24:07.589 Malloc7 00:24:07.589 Malloc8 00:24:07.589 Malloc9 00:24:07.589 Malloc10 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=169735 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 169735 /var/tmp/bdevperf.sock 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 169735 ']' 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.589 07:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:07.589 { 00:24:07.589 "params": { 00:24:07.589 "name": "Nvme$subsystem", 00:24:07.589 "trtype": "$TEST_TRANSPORT", 00:24:07.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.589 "adrfam": "ipv4", 00:24:07.589 "trsvcid": "$NVMF_PORT", 00:24:07.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.589 "hdgst": ${hdgst:-false}, 00:24:07.589 "ddgst": ${ddgst:-false} 00:24:07.589 }, 00:24:07.589 "method": "bdev_nvme_attach_controller" 00:24:07.589 } 00:24:07.589 EOF 00:24:07.589 )") 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:07.589 { 00:24:07.589 "params": { 00:24:07.589 "name": "Nvme$subsystem", 00:24:07.589 "trtype": "$TEST_TRANSPORT", 00:24:07.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.589 "adrfam": "ipv4", 00:24:07.589 "trsvcid": "$NVMF_PORT", 00:24:07.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.589 "hdgst": ${hdgst:-false}, 00:24:07.589 "ddgst": ${ddgst:-false} 00:24:07.589 }, 00:24:07.589 "method": "bdev_nvme_attach_controller" 00:24:07.589 } 00:24:07.589 EOF 00:24:07.589 )") 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:07.589 { 00:24:07.589 "params": { 00:24:07.589 "name": 
"Nvme$subsystem", 00:24:07.589 "trtype": "$TEST_TRANSPORT", 00:24:07.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.589 "adrfam": "ipv4", 00:24:07.589 "trsvcid": "$NVMF_PORT", 00:24:07.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.589 "hdgst": ${hdgst:-false}, 00:24:07.589 "ddgst": ${ddgst:-false} 00:24:07.589 }, 00:24:07.589 "method": "bdev_nvme_attach_controller" 00:24:07.589 } 00:24:07.589 EOF 00:24:07.589 )") 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:07.589 { 00:24:07.589 "params": { 00:24:07.589 "name": "Nvme$subsystem", 00:24:07.589 "trtype": "$TEST_TRANSPORT", 00:24:07.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.589 "adrfam": "ipv4", 00:24:07.589 "trsvcid": "$NVMF_PORT", 00:24:07.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.589 "hdgst": ${hdgst:-false}, 00:24:07.589 "ddgst": ${ddgst:-false} 00:24:07.589 }, 00:24:07.589 "method": "bdev_nvme_attach_controller" 00:24:07.589 } 00:24:07.589 EOF 00:24:07.589 )") 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:07.589 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:07.851 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:07.851 { 00:24:07.851 "params": { 00:24:07.851 "name": "Nvme$subsystem", 00:24:07.851 "trtype": "$TEST_TRANSPORT", 00:24:07.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.851 "adrfam": "ipv4", 00:24:07.851 "trsvcid": "$NVMF_PORT", 00:24:07.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.851 "hdgst": ${hdgst:-false}, 00:24:07.851 "ddgst": ${ddgst:-false} 00:24:07.851 }, 00:24:07.851 "method": "bdev_nvme_attach_controller" 00:24:07.851 } 00:24:07.851 EOF 00:24:07.851 )") 00:24:07.851 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:07.851 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:07.851 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:07.851 { 00:24:07.851 "params": { 00:24:07.851 "name": "Nvme$subsystem", 00:24:07.851 "trtype": "$TEST_TRANSPORT", 00:24:07.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.851 "adrfam": "ipv4", 00:24:07.851 "trsvcid": "$NVMF_PORT", 00:24:07.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.851 "hdgst": ${hdgst:-false}, 00:24:07.851 "ddgst": ${ddgst:-false} 00:24:07.851 }, 00:24:07.851 "method": "bdev_nvme_attach_controller" 00:24:07.851 } 00:24:07.851 EOF 00:24:07.851 )") 00:24:07.851 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:07.851 [2024-07-25 07:30:14.968394] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:24:07.851 [2024-07-25 07:30:14.968447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:07.851 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:07.851 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:07.851 { 00:24:07.851 "params": { 00:24:07.851 "name": "Nvme$subsystem", 00:24:07.851 "trtype": "$TEST_TRANSPORT", 00:24:07.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.851 "adrfam": "ipv4", 00:24:07.851 "trsvcid": "$NVMF_PORT", 00:24:07.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.851 "hdgst": ${hdgst:-false}, 00:24:07.851 "ddgst": ${ddgst:-false} 00:24:07.851 }, 00:24:07.851 "method": "bdev_nvme_attach_controller" 00:24:07.851 } 00:24:07.851 EOF 00:24:07.851 )") 00:24:07.851 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:07.851 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:07.851 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:07.851 { 00:24:07.851 "params": { 00:24:07.851 "name": "Nvme$subsystem", 00:24:07.851 "trtype": "$TEST_TRANSPORT", 00:24:07.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.851 "adrfam": "ipv4", 00:24:07.851 "trsvcid": "$NVMF_PORT", 00:24:07.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.851 "hdgst": ${hdgst:-false}, 00:24:07.851 "ddgst": ${ddgst:-false} 00:24:07.851 }, 00:24:07.851 "method": "bdev_nvme_attach_controller" 00:24:07.851 } 00:24:07.851 EOF 00:24:07.851 )") 00:24:07.851 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:07.852 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:07.852 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:07.852 { 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme$subsystem", 00:24:07.852 "trtype": "$TEST_TRANSPORT", 00:24:07.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "$NVMF_PORT", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.852 "hdgst": ${hdgst:-false}, 00:24:07.852 "ddgst": ${ddgst:-false} 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 } 00:24:07.852 EOF 00:24:07.852 )") 00:24:07.852 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:07.852 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:07.852 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.852 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:07.852 { 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme$subsystem", 00:24:07.852 "trtype": "$TEST_TRANSPORT", 00:24:07.852 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "$NVMF_PORT", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.852 "hdgst": ${hdgst:-false}, 00:24:07.852 "ddgst": ${ddgst:-false} 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 } 00:24:07.852 EOF 00:24:07.852 )") 00:24:07.852 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:07.852 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:07.852 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:07.852 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme1", 00:24:07.852 "trtype": "tcp", 00:24:07.852 "traddr": "10.0.0.2", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "4420", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.852 "hdgst": false, 00:24:07.852 "ddgst": false 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 },{ 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme2", 00:24:07.852 "trtype": "tcp", 00:24:07.852 "traddr": "10.0.0.2", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "4420", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:07.852 "hdgst": false, 00:24:07.852 "ddgst": false 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 },{ 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme3", 00:24:07.852 "trtype": "tcp", 00:24:07.852 "traddr": "10.0.0.2", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "4420", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:07.852 "hdgst": false, 00:24:07.852 "ddgst": false 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 },{ 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme4", 00:24:07.852 "trtype": "tcp", 00:24:07.852 "traddr": "10.0.0.2", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "4420", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:07.852 "hdgst": false, 00:24:07.852 "ddgst": false 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 },{ 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme5", 00:24:07.852 "trtype": "tcp", 00:24:07.852 "traddr": "10.0.0.2", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "4420", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:07.852 "hdgst": false, 00:24:07.852 "ddgst": false 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 },{ 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme6", 00:24:07.852 "trtype": "tcp", 00:24:07.852 "traddr": "10.0.0.2", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "4420", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:07.852 "hdgst": false, 00:24:07.852 "ddgst": false 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 },{ 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme7", 00:24:07.852 
"trtype": "tcp", 00:24:07.852 "traddr": "10.0.0.2", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "4420", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:07.852 "hdgst": false, 00:24:07.852 "ddgst": false 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 },{ 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme8", 00:24:07.852 "trtype": "tcp", 00:24:07.852 "traddr": "10.0.0.2", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "4420", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:07.852 "hdgst": false, 00:24:07.852 "ddgst": false 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 },{ 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme9", 00:24:07.852 "trtype": "tcp", 00:24:07.852 "traddr": "10.0.0.2", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "4420", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:07.852 "hdgst": false, 00:24:07.852 "ddgst": false 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 },{ 00:24:07.852 "params": { 00:24:07.852 "name": "Nvme10", 00:24:07.852 "trtype": "tcp", 00:24:07.852 "traddr": "10.0.0.2", 00:24:07.852 "adrfam": "ipv4", 00:24:07.852 "trsvcid": "4420", 00:24:07.852 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:07.852 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:07.852 "hdgst": false, 00:24:07.852 "ddgst": false 00:24:07.852 }, 00:24:07.852 "method": "bdev_nvme_attach_controller" 00:24:07.852 }' 00:24:07.852 [2024-07-25 07:30:15.028720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.852 [2024-07-25 07:30:15.093399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.235 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:09.235 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:24:09.235 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:09.235 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.235 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:09.235 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.235 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 169735 00:24:09.235 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:09.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 169735 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:09.235 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 169354 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:10.179 { 00:24:10.179 "params": { 00:24:10.179 "name": "Nvme$subsystem", 00:24:10.179 "trtype": "$TEST_TRANSPORT", 00:24:10.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.179 "adrfam": "ipv4", 00:24:10.179 "trsvcid": "$NVMF_PORT", 00:24:10.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.179 "hdgst": ${hdgst:-false}, 00:24:10.179 "ddgst": ${ddgst:-false} 00:24:10.179 }, 00:24:10.179 "method": "bdev_nvme_attach_controller" 00:24:10.179 } 00:24:10.179 EOF 00:24:10.179 )") 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:10.179 { 00:24:10.179 "params": { 00:24:10.179 "name": "Nvme$subsystem", 00:24:10.179 "trtype": "$TEST_TRANSPORT", 00:24:10.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.179 "adrfam": "ipv4", 00:24:10.179 "trsvcid": "$NVMF_PORT", 00:24:10.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.179 "hdgst": ${hdgst:-false}, 00:24:10.179 "ddgst": ${ddgst:-false} 00:24:10.179 }, 00:24:10.179 "method": "bdev_nvme_attach_controller" 00:24:10.179 } 00:24:10.179 EOF 00:24:10.179 )") 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:10.179 { 00:24:10.179 "params": { 00:24:10.179 "name": "Nvme$subsystem", 00:24:10.179 "trtype": "$TEST_TRANSPORT", 00:24:10.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.179 "adrfam": "ipv4", 00:24:10.179 "trsvcid": "$NVMF_PORT", 00:24:10.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.179 "hdgst": ${hdgst:-false}, 00:24:10.179 "ddgst": ${ddgst:-false} 00:24:10.179 }, 00:24:10.179 "method": "bdev_nvme_attach_controller" 00:24:10.179 } 00:24:10.179 EOF 00:24:10.179 )") 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:10.179 { 00:24:10.179 "params": { 00:24:10.179 "name": "Nvme$subsystem", 00:24:10.179 "trtype": "$TEST_TRANSPORT", 00:24:10.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.179 "adrfam": "ipv4", 00:24:10.179 "trsvcid": "$NVMF_PORT", 00:24:10.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.179 "hdgst": ${hdgst:-false}, 00:24:10.179 "ddgst": ${ddgst:-false} 00:24:10.179 }, 00:24:10.179 "method": "bdev_nvme_attach_controller" 00:24:10.179 } 00:24:10.179 EOF 00:24:10.179 )") 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:10.179 { 00:24:10.179 "params": { 00:24:10.179 "name": "Nvme$subsystem", 00:24:10.179 "trtype": "$TEST_TRANSPORT", 00:24:10.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.179 "adrfam": "ipv4", 00:24:10.179 "trsvcid": "$NVMF_PORT", 00:24:10.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.179 "hdgst": ${hdgst:-false}, 00:24:10.179 "ddgst": ${ddgst:-false} 00:24:10.179 }, 00:24:10.179 "method": "bdev_nvme_attach_controller" 00:24:10.179 } 00:24:10.179 EOF 00:24:10.179 )") 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:10.179 { 00:24:10.179 "params": { 00:24:10.179 "name": "Nvme$subsystem", 00:24:10.179 "trtype": "$TEST_TRANSPORT", 00:24:10.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.179 "adrfam": "ipv4", 00:24:10.179 "trsvcid": "$NVMF_PORT", 00:24:10.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.179 "hdgst": ${hdgst:-false}, 00:24:10.179 "ddgst": ${ddgst:-false} 00:24:10.179 }, 00:24:10.179 "method": "bdev_nvme_attach_controller" 00:24:10.179 } 00:24:10.179 EOF 00:24:10.179 )") 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:10.179 [2024-07-25 07:30:17.456741] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
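After killing the placeholder bdev_svc instance (pid 169735 above) and confirming with kill -0 that the target (pid 169354) is still running, shutdown_tc1 runs a short bdevperf verify pass against the same ten subsystems. The traced invocation reduces to the sketch below; the /dev/fd/62 path seen in the trace is the process substitution, and the flag meanings are standard bdevperf options (gen_nvmf_target_json comes from the sourced nvmf/common.sh):

    # bdevperf run from shutdown_tc1, as traced above
    #   -q 64     : 64 outstanding I/Os per bdev
    #   -o 65536  : 64 KiB I/O size
    #   -w verify : write, read back and compare the data
    #   -t 1      : run for one second
    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$BDEVPERF" --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 1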
00:24:10.179 [2024-07-25 07:30:17.456794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170118 ] 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:10.179 { 00:24:10.179 "params": { 00:24:10.179 "name": "Nvme$subsystem", 00:24:10.179 "trtype": "$TEST_TRANSPORT", 00:24:10.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.179 "adrfam": "ipv4", 00:24:10.179 "trsvcid": "$NVMF_PORT", 00:24:10.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.179 "hdgst": ${hdgst:-false}, 00:24:10.179 "ddgst": ${ddgst:-false} 00:24:10.179 }, 00:24:10.179 "method": "bdev_nvme_attach_controller" 00:24:10.179 } 00:24:10.179 EOF 00:24:10.179 )") 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:10.179 { 00:24:10.179 "params": { 00:24:10.179 "name": "Nvme$subsystem", 00:24:10.179 "trtype": "$TEST_TRANSPORT", 00:24:10.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.179 "adrfam": "ipv4", 00:24:10.179 "trsvcid": "$NVMF_PORT", 00:24:10.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.179 "hdgst": ${hdgst:-false}, 00:24:10.179 "ddgst": ${ddgst:-false} 00:24:10.179 }, 00:24:10.179 "method": "bdev_nvme_attach_controller" 00:24:10.179 } 00:24:10.179 EOF 00:24:10.179 )") 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:10.179 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:10.179 { 00:24:10.179 "params": { 00:24:10.179 "name": "Nvme$subsystem", 00:24:10.179 "trtype": "$TEST_TRANSPORT", 00:24:10.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.179 "adrfam": "ipv4", 00:24:10.180 "trsvcid": "$NVMF_PORT", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.180 "hdgst": ${hdgst:-false}, 00:24:10.180 "ddgst": ${ddgst:-false} 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 } 00:24:10.180 EOF 00:24:10.180 )") 00:24:10.180 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:10.180 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:10.180 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:10.180 { 00:24:10.180 "params": { 00:24:10.180 "name": "Nvme$subsystem", 00:24:10.180 "trtype": "$TEST_TRANSPORT", 00:24:10.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.180 
"adrfam": "ipv4", 00:24:10.180 "trsvcid": "$NVMF_PORT", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.180 "hdgst": ${hdgst:-false}, 00:24:10.180 "ddgst": ${ddgst:-false} 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 } 00:24:10.180 EOF 00:24:10.180 )") 00:24:10.180 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:10.180 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.180 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:10.180 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:10.180 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:10.180 "params": { 00:24:10.180 "name": "Nvme1", 00:24:10.180 "trtype": "tcp", 00:24:10.180 "traddr": "10.0.0.2", 00:24:10.180 "adrfam": "ipv4", 00:24:10.180 "trsvcid": "4420", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.180 "hdgst": false, 00:24:10.180 "ddgst": false 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 },{ 00:24:10.180 "params": { 00:24:10.180 "name": "Nvme2", 00:24:10.180 "trtype": "tcp", 00:24:10.180 "traddr": "10.0.0.2", 00:24:10.180 "adrfam": "ipv4", 00:24:10.180 "trsvcid": "4420", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:10.180 "hdgst": false, 00:24:10.180 "ddgst": false 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 },{ 00:24:10.180 "params": { 00:24:10.180 "name": "Nvme3", 00:24:10.180 "trtype": "tcp", 00:24:10.180 "traddr": "10.0.0.2", 00:24:10.180 "adrfam": "ipv4", 00:24:10.180 "trsvcid": "4420", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:10.180 "hdgst": false, 00:24:10.180 "ddgst": false 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 },{ 00:24:10.180 "params": { 00:24:10.180 "name": "Nvme4", 00:24:10.180 "trtype": "tcp", 00:24:10.180 "traddr": "10.0.0.2", 00:24:10.180 "adrfam": "ipv4", 00:24:10.180 "trsvcid": "4420", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:10.180 "hdgst": false, 00:24:10.180 "ddgst": false 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 },{ 00:24:10.180 "params": { 00:24:10.180 "name": "Nvme5", 00:24:10.180 "trtype": "tcp", 00:24:10.180 "traddr": "10.0.0.2", 00:24:10.180 "adrfam": "ipv4", 00:24:10.180 "trsvcid": "4420", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:10.180 "hdgst": false, 00:24:10.180 "ddgst": false 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 },{ 00:24:10.180 "params": { 00:24:10.180 "name": "Nvme6", 00:24:10.180 "trtype": "tcp", 00:24:10.180 "traddr": "10.0.0.2", 00:24:10.180 "adrfam": "ipv4", 00:24:10.180 "trsvcid": "4420", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:10.180 "hdgst": false, 00:24:10.180 "ddgst": false 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 },{ 00:24:10.180 "params": { 00:24:10.180 "name": "Nvme7", 
00:24:10.180 "trtype": "tcp", 00:24:10.180 "traddr": "10.0.0.2", 00:24:10.180 "adrfam": "ipv4", 00:24:10.180 "trsvcid": "4420", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:10.180 "hdgst": false, 00:24:10.180 "ddgst": false 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 },{ 00:24:10.180 "params": { 00:24:10.180 "name": "Nvme8", 00:24:10.180 "trtype": "tcp", 00:24:10.180 "traddr": "10.0.0.2", 00:24:10.180 "adrfam": "ipv4", 00:24:10.180 "trsvcid": "4420", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:10.180 "hdgst": false, 00:24:10.180 "ddgst": false 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 },{ 00:24:10.180 "params": { 00:24:10.180 "name": "Nvme9", 00:24:10.180 "trtype": "tcp", 00:24:10.180 "traddr": "10.0.0.2", 00:24:10.180 "adrfam": "ipv4", 00:24:10.180 "trsvcid": "4420", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:10.180 "hdgst": false, 00:24:10.180 "ddgst": false 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 },{ 00:24:10.180 "params": { 00:24:10.180 "name": "Nvme10", 00:24:10.180 "trtype": "tcp", 00:24:10.180 "traddr": "10.0.0.2", 00:24:10.180 "adrfam": "ipv4", 00:24:10.180 "trsvcid": "4420", 00:24:10.180 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:10.180 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:10.180 "hdgst": false, 00:24:10.180 "ddgst": false 00:24:10.180 }, 00:24:10.180 "method": "bdev_nvme_attach_controller" 00:24:10.180 }' 00:24:10.180 [2024-07-25 07:30:17.517528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.474 [2024-07-25 07:30:17.581786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.414 Running I/O for 1 seconds... 
00:24:12.800 00:24:12.800 Latency(us) 00:24:12.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.800 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:12.800 Verification LBA range: start 0x0 length 0x400 00:24:12.800 Nvme1n1 : 1.11 173.60 10.85 0.00 0.00 364777.81 23920.64 304087.04 00:24:12.800 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:12.800 Verification LBA range: start 0x0 length 0x400 00:24:12.800 Nvme2n1 : 1.13 284.30 17.77 0.00 0.00 218656.60 20643.84 244667.73 00:24:12.800 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:12.800 Verification LBA range: start 0x0 length 0x400 00:24:12.800 Nvme3n1 : 1.07 254.04 15.88 0.00 0.00 234819.58 6580.91 237677.23 00:24:12.800 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:12.800 Verification LBA range: start 0x0 length 0x400 00:24:12.800 Nvme4n1 : 1.12 228.58 14.29 0.00 0.00 262542.51 22500.69 248162.99 00:24:12.800 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:12.800 Verification LBA range: start 0x0 length 0x400 00:24:12.800 Nvme5n1 : 1.13 227.25 14.20 0.00 0.00 259138.35 23483.73 248162.99 00:24:12.800 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:12.800 Verification LBA range: start 0x0 length 0x400 00:24:12.800 Nvme6n1 : 1.12 229.29 14.33 0.00 0.00 251983.15 23156.05 248162.99 00:24:12.800 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:12.800 Verification LBA range: start 0x0 length 0x400 00:24:12.800 Nvme7n1 : 1.17 219.29 13.71 0.00 0.00 259938.35 22500.69 274377.39 00:24:12.800 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:12.800 Verification LBA range: start 0x0 length 0x400 00:24:12.800 Nvme8n1 : 1.15 222.18 13.89 0.00 0.00 251303.04 24357.55 253405.87 00:24:12.800 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:12.800 Verification LBA range: start 0x0 length 0x400 00:24:12.800 Nvme9n1 : 1.16 276.41 17.28 0.00 0.00 198151.68 12724.91 258648.75 00:24:12.800 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:12.800 Verification LBA range: start 0x0 length 0x400 00:24:12.800 Nvme10n1 : 1.20 267.02 16.69 0.00 0.00 202499.75 14199.47 242920.11 00:24:12.800 =================================================================================================================== 00:24:12.800 Total : 2381.94 148.87 0.00 0.00 244457.46 6580.91 304087.04 00:24:12.800 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:24:12.800 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:12.800 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:12.800 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:12.800 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:12.800 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.801 07:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:24:12.801 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.801 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:24:12.801 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.801 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:12.801 rmmod nvme_tcp 00:24:13.062 rmmod nvme_fabrics 00:24:13.062 rmmod nvme_keyring 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 169354 ']' 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 169354 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 169354 ']' 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 169354 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 169354 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 169354' 00:24:13.062 killing process with pid 169354 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 169354 00:24:13.062 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 169354 00:24:13.323 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:13.323 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:13.323 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:13.323 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:13.323 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:13.323 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
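The killprocess 169354 steps traced above come from autotest_common.sh@950-974: verify a pid was passed, confirm the process is still alive, look up its command name (reactor_1 here, i.e. the nvmf_tgt reactor rather than a sudo wrapper), then signal and reap it. A rough reconstruction of that helper from the xtrace lines, showing only the branch exercised in this run:

    # Reconstructed from the trace above; the sudo branch is not exercised
    # in this run and is therefore omitted.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # pid must be supplied
        kill -0 "$pid" || return 1           # process must still exist
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # In this run process_name is reactor_1, so the plain-kill path runs.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it so the netns and ports free up
    }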
00:24:13.323 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.323 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.238 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:15.238 00:24:15.238 real 0m16.201s 00:24:15.238 user 0m32.604s 00:24:15.238 sys 0m6.582s 00:24:15.238 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:15.238 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:15.238 ************************************ 00:24:15.238 END TEST nvmf_shutdown_tc1 00:24:15.238 ************************************ 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:15.501 ************************************ 00:24:15.501 START TEST nvmf_shutdown_tc2 00:24:15.501 ************************************ 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.501 07:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.501 07:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:15.501 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:15.501 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.501 07:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:15.501 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.501 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:15.502 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.502 07:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.502 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.763 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:15.763 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.763 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.763 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.763 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:15.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:24:15.764 00:24:15.764 --- 10.0.0.2 ping statistics --- 00:24:15.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.764 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:15.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.444 ms 00:24:15.764 00:24:15.764 --- 10.0.0.1 ping statistics --- 00:24:15.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.764 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=171417 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 171417 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 171417 ']' 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
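The nvmf_tcp_init sequence traced above isolates one port of the E810 NIC in a network namespace so that the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) talk over a real physical link. Condensed from the trace, with the values printed in this run, the setup boils down to:

    # Condensed from the nvmf_tcp_init xtrace above (phy setup, E810 ports).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator sanity check

nvmf_tgt is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E invocation above), so its NVMe/TCP listener on 10.0.0.2:4420 is reachable from the root namespace over the back-to-back link, which is exactly what the two one-packet pings verify.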
00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.764 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.025 [2024-07-25 07:30:23.143187] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:24:16.025 [2024-07-25 07:30:23.143259] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.025 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.025 [2024-07-25 07:30:23.229423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.025 [2024-07-25 07:30:23.291709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.025 [2024-07-25 07:30:23.291744] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.025 [2024-07-25 07:30:23.291750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.025 [2024-07-25 07:30:23.291755] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.025 [2024-07-25 07:30:23.291759] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.025 [2024-07-25 07:30:23.291876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.025 [2024-07-25 07:30:23.292039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.025 [2024-07-25 07:30:23.292195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.025 [2024-07-25 07:30:23.292197] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:24:16.597 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.597 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:16.597 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.597 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.597 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.597 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.597 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:16.597 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.597 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.859 [2024-07-25 07:30:23.965874] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:16.859 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.859 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:24:16.859 Malloc1 00:24:16.859 [2024-07-25 07:30:24.064592] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.859 Malloc2 00:24:16.859 Malloc3 00:24:16.859 Malloc4 00:24:16.859 Malloc5 00:24:17.121 Malloc6 00:24:17.121 Malloc7 00:24:17.121 Malloc8 00:24:17.121 Malloc9 00:24:17.121 Malloc10 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=171632 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 171632 /var/tmp/bdevperf.sock 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 171632 ']' 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
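The per-subsystem RPC lines that the cat calls above append to rpcs.txt are not echoed in this trace, but the Malloc1 through Malloc10 bdev names and the 10.0.0.2:4420 listener notice indicate the usual pattern: create a malloc bdev, create subsystem cnode$i, attach the bdev as a namespace, add a TCP listener, then replay the whole batch through rpc_cmd against the target in the namespace. A representative block for the first subsystem, under the assumption that standard rpc.py methods are used (the size, block size and serial number below are illustrative, not taken from the log):

    # Plausible rpcs.txt block for subsystem 1; these exact lines are an
    # assumption, only their effects (Malloc1..Malloc10, listener on
    # 10.0.0.2:4420) are visible in the trace above.
    bdev_malloc_create 64 512 -b Malloc1
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420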
00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.121 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.121 { 00:24:17.121 "params": { 00:24:17.121 "name": "Nvme$subsystem", 00:24:17.121 "trtype": "$TEST_TRANSPORT", 00:24:17.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.121 "adrfam": "ipv4", 00:24:17.121 "trsvcid": "$NVMF_PORT", 00:24:17.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.121 "hdgst": ${hdgst:-false}, 00:24:17.121 "ddgst": ${ddgst:-false} 00:24:17.121 }, 00:24:17.121 "method": "bdev_nvme_attach_controller" 00:24:17.122 } 00:24:17.122 EOF 00:24:17.122 )") 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.122 { 00:24:17.122 "params": { 00:24:17.122 "name": "Nvme$subsystem", 00:24:17.122 "trtype": "$TEST_TRANSPORT", 00:24:17.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.122 "adrfam": "ipv4", 00:24:17.122 "trsvcid": "$NVMF_PORT", 00:24:17.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.122 "hdgst": ${hdgst:-false}, 00:24:17.122 "ddgst": ${ddgst:-false} 00:24:17.122 }, 00:24:17.122 "method": "bdev_nvme_attach_controller" 00:24:17.122 } 00:24:17.122 EOF 00:24:17.122 )") 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.122 { 00:24:17.122 "params": { 00:24:17.122 "name": "Nvme$subsystem", 00:24:17.122 "trtype": "$TEST_TRANSPORT", 00:24:17.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.122 "adrfam": "ipv4", 00:24:17.122 "trsvcid": "$NVMF_PORT", 00:24:17.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.122 "hdgst": ${hdgst:-false}, 00:24:17.122 "ddgst": ${ddgst:-false} 00:24:17.122 }, 00:24:17.122 "method": 
"bdev_nvme_attach_controller" 00:24:17.122 } 00:24:17.122 EOF 00:24:17.122 )") 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.122 { 00:24:17.122 "params": { 00:24:17.122 "name": "Nvme$subsystem", 00:24:17.122 "trtype": "$TEST_TRANSPORT", 00:24:17.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.122 "adrfam": "ipv4", 00:24:17.122 "trsvcid": "$NVMF_PORT", 00:24:17.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.122 "hdgst": ${hdgst:-false}, 00:24:17.122 "ddgst": ${ddgst:-false} 00:24:17.122 }, 00:24:17.122 "method": "bdev_nvme_attach_controller" 00:24:17.122 } 00:24:17.122 EOF 00:24:17.122 )") 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.122 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.122 { 00:24:17.122 "params": { 00:24:17.122 "name": "Nvme$subsystem", 00:24:17.122 "trtype": "$TEST_TRANSPORT", 00:24:17.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.122 "adrfam": "ipv4", 00:24:17.122 "trsvcid": "$NVMF_PORT", 00:24:17.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.122 "hdgst": ${hdgst:-false}, 00:24:17.122 "ddgst": ${ddgst:-false} 00:24:17.122 }, 00:24:17.122 "method": "bdev_nvme_attach_controller" 00:24:17.122 } 00:24:17.122 EOF 00:24:17.122 )") 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.384 { 00:24:17.384 "params": { 00:24:17.384 "name": "Nvme$subsystem", 00:24:17.384 "trtype": "$TEST_TRANSPORT", 00:24:17.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.384 "adrfam": "ipv4", 00:24:17.384 "trsvcid": "$NVMF_PORT", 00:24:17.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.384 "hdgst": ${hdgst:-false}, 00:24:17.384 "ddgst": ${ddgst:-false} 00:24:17.384 }, 00:24:17.384 "method": "bdev_nvme_attach_controller" 00:24:17.384 } 00:24:17.384 EOF 00:24:17.384 )") 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:17.384 [2024-07-25 07:30:24.502451] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:24:17.384 [2024-07-25 07:30:24.502504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171632 ] 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.384 { 00:24:17.384 "params": { 00:24:17.384 "name": "Nvme$subsystem", 00:24:17.384 "trtype": "$TEST_TRANSPORT", 00:24:17.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.384 "adrfam": "ipv4", 00:24:17.384 "trsvcid": "$NVMF_PORT", 00:24:17.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.384 "hdgst": ${hdgst:-false}, 00:24:17.384 "ddgst": ${ddgst:-false} 00:24:17.384 }, 00:24:17.384 "method": "bdev_nvme_attach_controller" 00:24:17.384 } 00:24:17.384 EOF 00:24:17.384 )") 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.384 { 00:24:17.384 "params": { 00:24:17.384 "name": "Nvme$subsystem", 00:24:17.384 "trtype": "$TEST_TRANSPORT", 00:24:17.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.384 "adrfam": "ipv4", 00:24:17.384 "trsvcid": "$NVMF_PORT", 00:24:17.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.384 "hdgst": ${hdgst:-false}, 00:24:17.384 "ddgst": ${ddgst:-false} 00:24:17.384 }, 00:24:17.384 "method": "bdev_nvme_attach_controller" 00:24:17.384 } 00:24:17.384 EOF 00:24:17.384 )") 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.384 { 00:24:17.384 "params": { 00:24:17.384 "name": "Nvme$subsystem", 00:24:17.384 "trtype": "$TEST_TRANSPORT", 00:24:17.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.384 "adrfam": "ipv4", 00:24:17.384 "trsvcid": "$NVMF_PORT", 00:24:17.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.384 "hdgst": ${hdgst:-false}, 00:24:17.384 "ddgst": ${ddgst:-false} 00:24:17.384 }, 00:24:17.384 "method": "bdev_nvme_attach_controller" 00:24:17.384 } 00:24:17.384 EOF 00:24:17.384 )") 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:17.384 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:17.384 { 00:24:17.384 "params": { 00:24:17.384 "name": "Nvme$subsystem", 00:24:17.384 "trtype": "$TEST_TRANSPORT", 00:24:17.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.384 
"adrfam": "ipv4", 00:24:17.384 "trsvcid": "$NVMF_PORT", 00:24:17.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.385 "hdgst": ${hdgst:-false}, 00:24:17.385 "ddgst": ${ddgst:-false} 00:24:17.385 }, 00:24:17.385 "method": "bdev_nvme_attach_controller" 00:24:17.385 } 00:24:17.385 EOF 00:24:17.385 )") 00:24:17.385 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.385 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:17.385 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:24:17.385 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:17.385 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:17.385 "params": { 00:24:17.385 "name": "Nvme1", 00:24:17.385 "trtype": "tcp", 00:24:17.385 "traddr": "10.0.0.2", 00:24:17.385 "adrfam": "ipv4", 00:24:17.385 "trsvcid": "4420", 00:24:17.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:17.385 "hdgst": false, 00:24:17.385 "ddgst": false 00:24:17.385 }, 00:24:17.385 "method": "bdev_nvme_attach_controller" 00:24:17.385 },{ 00:24:17.385 "params": { 00:24:17.385 "name": "Nvme2", 00:24:17.385 "trtype": "tcp", 00:24:17.385 "traddr": "10.0.0.2", 00:24:17.385 "adrfam": "ipv4", 00:24:17.385 "trsvcid": "4420", 00:24:17.385 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:17.385 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:17.385 "hdgst": false, 00:24:17.385 "ddgst": false 00:24:17.385 }, 00:24:17.385 "method": "bdev_nvme_attach_controller" 00:24:17.385 },{ 00:24:17.385 "params": { 00:24:17.385 "name": "Nvme3", 00:24:17.385 "trtype": "tcp", 00:24:17.385 "traddr": "10.0.0.2", 00:24:17.385 "adrfam": "ipv4", 00:24:17.385 "trsvcid": "4420", 00:24:17.385 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:17.385 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:17.385 "hdgst": false, 00:24:17.385 "ddgst": false 00:24:17.385 }, 00:24:17.385 "method": "bdev_nvme_attach_controller" 00:24:17.385 },{ 00:24:17.385 "params": { 00:24:17.385 "name": "Nvme4", 00:24:17.385 "trtype": "tcp", 00:24:17.385 "traddr": "10.0.0.2", 00:24:17.385 "adrfam": "ipv4", 00:24:17.385 "trsvcid": "4420", 00:24:17.385 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:17.385 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:17.385 "hdgst": false, 00:24:17.385 "ddgst": false 00:24:17.385 }, 00:24:17.385 "method": "bdev_nvme_attach_controller" 00:24:17.385 },{ 00:24:17.385 "params": { 00:24:17.385 "name": "Nvme5", 00:24:17.385 "trtype": "tcp", 00:24:17.385 "traddr": "10.0.0.2", 00:24:17.385 "adrfam": "ipv4", 00:24:17.385 "trsvcid": "4420", 00:24:17.385 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:17.385 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:17.385 "hdgst": false, 00:24:17.385 "ddgst": false 00:24:17.385 }, 00:24:17.385 "method": "bdev_nvme_attach_controller" 00:24:17.385 },{ 00:24:17.385 "params": { 00:24:17.385 "name": "Nvme6", 00:24:17.385 "trtype": "tcp", 00:24:17.385 "traddr": "10.0.0.2", 00:24:17.385 "adrfam": "ipv4", 00:24:17.385 "trsvcid": "4420", 00:24:17.385 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:17.385 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:17.385 "hdgst": false, 00:24:17.385 "ddgst": false 00:24:17.385 }, 00:24:17.385 "method": "bdev_nvme_attach_controller" 00:24:17.385 },{ 00:24:17.385 "params": { 00:24:17.385 "name": "Nvme7", 
00:24:17.385 "trtype": "tcp", 00:24:17.385 "traddr": "10.0.0.2", 00:24:17.385 "adrfam": "ipv4", 00:24:17.385 "trsvcid": "4420", 00:24:17.385 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:17.385 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:17.385 "hdgst": false, 00:24:17.385 "ddgst": false 00:24:17.385 }, 00:24:17.385 "method": "bdev_nvme_attach_controller" 00:24:17.385 },{ 00:24:17.385 "params": { 00:24:17.385 "name": "Nvme8", 00:24:17.385 "trtype": "tcp", 00:24:17.385 "traddr": "10.0.0.2", 00:24:17.385 "adrfam": "ipv4", 00:24:17.385 "trsvcid": "4420", 00:24:17.385 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:17.385 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:17.385 "hdgst": false, 00:24:17.385 "ddgst": false 00:24:17.385 }, 00:24:17.385 "method": "bdev_nvme_attach_controller" 00:24:17.385 },{ 00:24:17.385 "params": { 00:24:17.385 "name": "Nvme9", 00:24:17.385 "trtype": "tcp", 00:24:17.385 "traddr": "10.0.0.2", 00:24:17.385 "adrfam": "ipv4", 00:24:17.385 "trsvcid": "4420", 00:24:17.385 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:17.385 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:17.385 "hdgst": false, 00:24:17.385 "ddgst": false 00:24:17.385 }, 00:24:17.385 "method": "bdev_nvme_attach_controller" 00:24:17.385 },{ 00:24:17.385 "params": { 00:24:17.385 "name": "Nvme10", 00:24:17.385 "trtype": "tcp", 00:24:17.385 "traddr": "10.0.0.2", 00:24:17.385 "adrfam": "ipv4", 00:24:17.385 "trsvcid": "4420", 00:24:17.385 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:17.385 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:17.385 "hdgst": false, 00:24:17.385 "ddgst": false 00:24:17.385 }, 00:24:17.385 "method": "bdev_nvme_attach_controller" 00:24:17.385 }' 00:24:17.385 [2024-07-25 07:30:24.562372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.385 [2024-07-25 07:30:24.627417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.771 Running I/O for 10 seconds... 
00:24:18.771 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.771 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:18.771 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:18.771 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.771 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.771 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.031 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.031 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:19.031 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:19.031 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:19.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:19.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:19.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:19.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:19.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.292 07:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:19.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:19.292 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 171632 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 171632 ']' 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 171632 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 171632 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 171632' 00:24:19.554 killing process with pid 171632 00:24:19.554 07:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 171632
00:24:19.554 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 171632
00:24:19.554 Received shutdown signal, test time was about 1.002402 seconds
00:24:19.554
00:24:19.554 Latency(us)
00:24:19.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:19.554 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:19.554 Verification LBA range: start 0x0 length 0x400
00:24:19.554 Nvme1n1 : 0.99 259.82 16.24 0.00 0.00 243531.52 41943.04 234181.97
00:24:19.554 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:19.554 Verification LBA range: start 0x0 length 0x400
00:24:19.554 Nvme2n1 : 0.99 323.15 20.20 0.00 0.00 191918.42 14745.60 217579.52
00:24:19.554 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:19.554 Verification LBA range: start 0x0 length 0x400
00:24:19.554 Nvme3n1 : 0.98 261.28 16.33 0.00 0.00 232690.77 23265.28 244667.73
00:24:19.554 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:19.554 Verification LBA range: start 0x0 length 0x400
00:24:19.554 Nvme4n1 : 0.96 199.56 12.47 0.00 0.00 297722.88 23374.51 260396.37
00:24:19.554 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:19.554 Verification LBA range: start 0x0 length 0x400
00:24:19.554 Nvme5n1 : 1.00 192.18 12.01 0.00 0.00 304143.64 23483.73 323310.93
00:24:19.554 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:19.554 Verification LBA range: start 0x0 length 0x400
00:24:19.554 Nvme6n1 : 1.00 256.95 16.06 0.00 0.00 222469.76 22391.47 249910.61
00:24:19.554 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:19.554 Verification LBA range: start 0x0 length 0x400
00:24:19.554 Nvme7n1 : 0.98 261.94 16.37 0.00 0.00 212949.55 21080.75 222822.40
00:24:19.554 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:19.554 Verification LBA range: start 0x0 length 0x400
00:24:19.554 Nvme8n1 : 0.97 198.66 12.42 0.00 0.00 274360.89 24029.87 255153.49
00:24:19.554 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:19.554 Verification LBA range: start 0x0 length 0x400
00:24:19.554 Nvme9n1 : 1.00 191.71 11.98 0.00 0.00 279697.64 25777.49 321563.31
00:24:19.554 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:19.554 Verification LBA range: start 0x0 length 0x400
00:24:19.554 Nvme10n1 : 0.96 199.78 12.49 0.00 0.00 259594.81 22828.37 255153.49
00:24:19.554 ===================================================================================================================
00:24:19.554 Total : 2345.05 146.57 0.00 0.00 245908.84 14745.60 323310.93
00:24:19.815 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 171417
00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:20.757 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:20.757 rmmod nvme_tcp 00:24:20.757 rmmod nvme_fabrics 00:24:20.757 rmmod nvme_keyring 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 171417 ']' 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 171417 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 171417 ']' 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 171417 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 171417 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 171417' 00:24:21.019 killing process with pid 171417 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 171417 00:24:21.019 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 171417 00:24:21.280 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:21.280 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:21.280 
07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:21.280 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:21.280 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:21.280 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.280 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.280 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.196 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:23.196 00:24:23.196 real 0m7.845s 00:24:23.196 user 0m23.219s 00:24:23.196 sys 0m1.361s 00:24:23.196 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:23.196 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.196 ************************************ 00:24:23.196 END TEST nvmf_shutdown_tc2 00:24:23.196 ************************************ 00:24:23.196 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:23.196 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:23.196 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:23.196 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:23.458 ************************************ 00:24:23.458 START TEST nvmf_shutdown_tc3 00:24:23.458 ************************************ 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.458 
07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:23.458 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.458 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:23.459 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:23.459 07:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:23.459 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:23.459 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.459 07:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.459 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.720 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.720 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.720 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:24:23.720 00:24:23.720 --- 10.0.0.2 ping statistics --- 00:24:23.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.720 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:24:23.720 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:24:23.720 00:24:23.721 --- 10.0.0.1 ping statistics --- 00:24:23.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.721 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=173069 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 173069 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 173069 ']' 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
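Stripped of the xtrace prefixes, the nvmf_tcp_init sequence traced above comes down to the following: the first e810 port (cvl_0_0) is moved into a private network namespace for the target, the second port (cvl_0_1) stays in the root namespace as the initiator, both ends of the link get a 10.0.0.x/24 address, TCP port 4420 is opened and connectivity is checked with ping before the target starts. The sketch below condenses the commands shown in the trace; the backgrounding of nvmf_tgt and the pid capture are assumptions, and the repeated "ip netns exec" prefix from the log is collapsed to one.

# Condensed from the nvmf/common.sh@229-268 trace above.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target interface lives in its own namespace, initiator stays in the root ns.
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"
modprobe nvme-tcp

# Start the target inside the namespace with the flags from the trace; the
# test's waitforlisten then blocks until /var/tmp/spdk.sock is listening.
ip netns exec "$NVMF_TARGET_NAMESPACE" \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!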
00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:23.721 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:23.721 [2024-07-25 07:30:30.995160] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:24:23.721 [2024-07-25 07:30:30.995215] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.721 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.721 [2024-07-25 07:30:31.075618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.982 [2024-07-25 07:30:31.130942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.982 [2024-07-25 07:30:31.130973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.982 [2024-07-25 07:30:31.130978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.982 [2024-07-25 07:30:31.130983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.982 [2024-07-25 07:30:31.130987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.982 [2024-07-25 07:30:31.131092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.982 [2024-07-25 07:30:31.131249] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.982 [2024-07-25 07:30:31.131550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.982 [2024-07-25 07:30:31.131551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.555 [2024-07-25 07:30:31.809836] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.555 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:24:24.555 Malloc1 00:24:24.555 [2024-07-25 07:30:31.908630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.816 Malloc2 00:24:24.816 Malloc3 00:24:24.816 Malloc4 00:24:24.816 Malloc5 00:24:24.816 Malloc6 00:24:24.816 Malloc7 00:24:24.816 Malloc8 00:24:25.078 Malloc9 00:24:25.078 Malloc10 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=173455 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 173455 /var/tmp/bdevperf.sock 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 173455 ']' 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
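At this point tc3 has built ten malloc-backed NVMe-oF subsystems (Malloc1 through Malloc10, listening on 10.0.0.2 port 4420) and hands them to bdevperf. Reconstructed from the trace (target/shutdown.sh@124-126), the launch step looks roughly like the sketch below; the flags are copied from the command line in the log, while the relative binary path, the process-substitution form of --json and the $! pid capture are assumptions.

# 64-deep queue, 64 KiB IOs, verify workload, 10 s run, driven over a
# dedicated RPC socket and configured with the generated attach-controller JSON.
./build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# Block until the bdevperf RPC server is up, then until its framework has
# finished initializing (the framework_wait_init call seen further down).
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init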
00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.078 { 00:24:25.078 "params": { 00:24:25.078 "name": "Nvme$subsystem", 00:24:25.078 "trtype": "$TEST_TRANSPORT", 00:24:25.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.078 "adrfam": "ipv4", 00:24:25.078 "trsvcid": "$NVMF_PORT", 00:24:25.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.078 "hdgst": ${hdgst:-false}, 00:24:25.078 "ddgst": ${ddgst:-false} 00:24:25.078 }, 00:24:25.078 "method": "bdev_nvme_attach_controller" 00:24:25.078 } 00:24:25.078 EOF 00:24:25.078 )") 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.078 { 00:24:25.078 "params": { 00:24:25.078 "name": "Nvme$subsystem", 00:24:25.078 "trtype": "$TEST_TRANSPORT", 00:24:25.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.078 "adrfam": "ipv4", 00:24:25.078 "trsvcid": "$NVMF_PORT", 00:24:25.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.078 "hdgst": ${hdgst:-false}, 00:24:25.078 "ddgst": ${ddgst:-false} 00:24:25.078 }, 00:24:25.078 "method": "bdev_nvme_attach_controller" 00:24:25.078 } 00:24:25.078 EOF 00:24:25.078 )") 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.078 { 00:24:25.078 "params": { 00:24:25.078 "name": "Nvme$subsystem", 00:24:25.078 "trtype": "$TEST_TRANSPORT", 00:24:25.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.078 "adrfam": "ipv4", 00:24:25.078 "trsvcid": "$NVMF_PORT", 00:24:25.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.078 "hdgst": ${hdgst:-false}, 00:24:25.078 "ddgst": ${ddgst:-false} 00:24:25.078 }, 00:24:25.078 "method": 
"bdev_nvme_attach_controller" 00:24:25.078 } 00:24:25.078 EOF 00:24:25.078 )") 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.078 { 00:24:25.078 "params": { 00:24:25.078 "name": "Nvme$subsystem", 00:24:25.078 "trtype": "$TEST_TRANSPORT", 00:24:25.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.078 "adrfam": "ipv4", 00:24:25.078 "trsvcid": "$NVMF_PORT", 00:24:25.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.078 "hdgst": ${hdgst:-false}, 00:24:25.078 "ddgst": ${ddgst:-false} 00:24:25.078 }, 00:24:25.078 "method": "bdev_nvme_attach_controller" 00:24:25.078 } 00:24:25.078 EOF 00:24:25.078 )") 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.078 { 00:24:25.078 "params": { 00:24:25.078 "name": "Nvme$subsystem", 00:24:25.078 "trtype": "$TEST_TRANSPORT", 00:24:25.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.078 "adrfam": "ipv4", 00:24:25.078 "trsvcid": "$NVMF_PORT", 00:24:25.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.078 "hdgst": ${hdgst:-false}, 00:24:25.078 "ddgst": ${ddgst:-false} 00:24:25.078 }, 00:24:25.078 "method": "bdev_nvme_attach_controller" 00:24:25.078 } 00:24:25.078 EOF 00:24:25.078 )") 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.078 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.078 { 00:24:25.078 "params": { 00:24:25.078 "name": "Nvme$subsystem", 00:24:25.078 "trtype": "$TEST_TRANSPORT", 00:24:25.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "$NVMF_PORT", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.079 "hdgst": ${hdgst:-false}, 00:24:25.079 "ddgst": ${ddgst:-false} 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 } 00:24:25.079 EOF 00:24:25.079 )") 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:25.079 [2024-07-25 07:30:32.357893] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:24:25.079 [2024-07-25 07:30:32.357946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173455 ] 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.079 { 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme$subsystem", 00:24:25.079 "trtype": "$TEST_TRANSPORT", 00:24:25.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "$NVMF_PORT", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.079 "hdgst": ${hdgst:-false}, 00:24:25.079 "ddgst": ${ddgst:-false} 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 } 00:24:25.079 EOF 00:24:25.079 )") 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.079 { 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme$subsystem", 00:24:25.079 "trtype": "$TEST_TRANSPORT", 00:24:25.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "$NVMF_PORT", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.079 "hdgst": ${hdgst:-false}, 00:24:25.079 "ddgst": ${ddgst:-false} 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 } 00:24:25.079 EOF 00:24:25.079 )") 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.079 { 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme$subsystem", 00:24:25.079 "trtype": "$TEST_TRANSPORT", 00:24:25.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "$NVMF_PORT", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.079 "hdgst": ${hdgst:-false}, 00:24:25.079 "ddgst": ${ddgst:-false} 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 } 00:24:25.079 EOF 00:24:25.079 )") 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:25.079 { 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme$subsystem", 00:24:25.079 "trtype": "$TEST_TRANSPORT", 00:24:25.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.079 
"adrfam": "ipv4", 00:24:25.079 "trsvcid": "$NVMF_PORT", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.079 "hdgst": ${hdgst:-false}, 00:24:25.079 "ddgst": ${ddgst:-false} 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 } 00:24:25.079 EOF 00:24:25.079 )") 00:24:25.079 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:25.079 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme1", 00:24:25.079 "trtype": "tcp", 00:24:25.079 "traddr": "10.0.0.2", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "4420", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.079 "hdgst": false, 00:24:25.079 "ddgst": false 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 },{ 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme2", 00:24:25.079 "trtype": "tcp", 00:24:25.079 "traddr": "10.0.0.2", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "4420", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:25.079 "hdgst": false, 00:24:25.079 "ddgst": false 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 },{ 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme3", 00:24:25.079 "trtype": "tcp", 00:24:25.079 "traddr": "10.0.0.2", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "4420", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:25.079 "hdgst": false, 00:24:25.079 "ddgst": false 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 },{ 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme4", 00:24:25.079 "trtype": "tcp", 00:24:25.079 "traddr": "10.0.0.2", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "4420", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:25.079 "hdgst": false, 00:24:25.079 "ddgst": false 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 },{ 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme5", 00:24:25.079 "trtype": "tcp", 00:24:25.079 "traddr": "10.0.0.2", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "4420", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:25.079 "hdgst": false, 00:24:25.079 "ddgst": false 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 },{ 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme6", 00:24:25.079 "trtype": "tcp", 00:24:25.079 "traddr": "10.0.0.2", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "4420", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:25.079 "hdgst": false, 00:24:25.079 "ddgst": false 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 },{ 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme7", 
00:24:25.079 "trtype": "tcp", 00:24:25.079 "traddr": "10.0.0.2", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "4420", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:25.079 "hdgst": false, 00:24:25.079 "ddgst": false 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 },{ 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme8", 00:24:25.079 "trtype": "tcp", 00:24:25.079 "traddr": "10.0.0.2", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "4420", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:25.079 "hdgst": false, 00:24:25.079 "ddgst": false 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 },{ 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme9", 00:24:25.079 "trtype": "tcp", 00:24:25.079 "traddr": "10.0.0.2", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "4420", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:25.079 "hdgst": false, 00:24:25.079 "ddgst": false 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 },{ 00:24:25.079 "params": { 00:24:25.079 "name": "Nvme10", 00:24:25.079 "trtype": "tcp", 00:24:25.079 "traddr": "10.0.0.2", 00:24:25.079 "adrfam": "ipv4", 00:24:25.079 "trsvcid": "4420", 00:24:25.079 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:25.079 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:25.079 "hdgst": false, 00:24:25.079 "ddgst": false 00:24:25.079 }, 00:24:25.079 "method": "bdev_nvme_attach_controller" 00:24:25.079 }' 00:24:25.080 [2024-07-25 07:30:32.417788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.341 [2024-07-25 07:30:32.482041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.724 Running I/O for 10 seconds... 
00:24:26.724 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.724 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:24:26.724 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:26.724 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.724 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:26.724 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:27.017 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:27.017 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:27.017 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:27.017 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:27.017 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.017 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.278 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.278 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:27.278 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:27.278 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:27.278 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:27.278 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:27.554 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 173069 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 173069 ']' 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 173069 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 173069 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:27.555 07:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 173069' 00:24:27.555 killing process with pid 173069 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 173069 00:24:27.555 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 173069 00:24:27.555 [2024-07-25 07:30:34.749379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be 
set 00:24:27.555 [2024-07-25 07:30:34.749548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.749745] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ae0 is same with the state(5) to be set 00:24:27.555 [2024-07-25 07:30:34.750524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b49e0 is same with the state(5) to be set 00:24:27.556 [2024-07-25 07:30:34.750554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b49e0 is same with the state(5) to be set 00:24:27.556 [2024-07-25 07:30:34.750782] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:27.556 [2024-07-25 07:30:34.751173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0fa0 is same with the state(5) to be set 00:24:27.556 [2024-07-25 07:30:34.751946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a1460 is same with the state(5) to be set 00:24:27.556 [2024-07-25 07:30:34.752300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
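Backing up a step: between "Running I/O for 10 seconds..." and the first qpair errors above, the trace walks target/shutdown.sh's waitforio loop, which polls bdevperf over /var/tmp/bdevperf.sock with bdev_get_iostat, extracts .bdevs[0].num_read_ops with jq, and retries up to ten times at 0.25-second intervals until at least 100 reads have completed (here the counter went 3, 67, then 195). A condensed sketch of that loop, assuming the rpc_cmd wrapper from autotest_common.sh; not the verbatim shutdown.sh source:

# Sketch of waitforio as traced at target/shutdown.sh@50-@69 above.
# rpc_cmd normally comes from autotest_common.sh; fall back to scripts/rpc.py
# (assumes the SPDK repo root as working directory) so the sketch runs alone.
if ! type rpc_cmd > /dev/null 2>&1; then
    rpc_cmd() { scripts/rpc.py "$@"; }
fi

waitforio_sketch() {
    local sock=$1 bdev=$2 ret=1 i read_io_count
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0    # enough I/O observed; the shutdown can proceed
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1

Only after this returns 0 does the test kill the nvmf target (killprocess 173069), which is what produces the qpair-teardown and aborted-I/O messages in this part of the log.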
00:24:27.556 [2024-07-25 07:30:34.752457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 
[2024-07-25 07:30:34.752638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 
07:30:34.752830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.752982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.556 [2024-07-25 07:30:34.752994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.556 [2024-07-25 07:30:34.753001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.753511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.753520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x254f470 is same with the state(5) to be set 00:24:27.557 [2024-07-25 07:30:34.753565] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x254f470 was disconnected and freed. reset controller. 
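Every command in the dump above is completed with ABORTED - SQ DELETION (status code type 00, status code 08): once pid 173069 is gone, the submission queue behind tqpair 0x254f470 no longer exists, so bdevperf's initiator aborts the in-flight WRITEs and READs, frees the qpair, and schedules a controller reset. When triaging a saved capture of output like this, a few greps summarize the storm; this is a hypothetical helper (log file name assumed), not part of the test scripts:

# Hypothetical triage of a saved capture; it only counts what the log already
# prints (aborted completions, freed qpairs, aborted commands per opcode).
log=nvmf_shutdown_tc3.log
echo "aborted completions: $(grep -c 'ABORTED - SQ DELETION' "$log")"
echo "qpairs torn down:"
grep -oE 'qpair 0x[0-9a-f]+ was disconnected and freed' "$log" | sort | uniq -c
echo "aborted commands by opcode:"
grep -oE '(WRITE|READ) sqid:[0-9]+ cid:[0-9]+' "$log" | awk '{print $1}' | sort | uniq -c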
00:24:27.557 [2024-07-25 07:30:34.755350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.755374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.755387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.755396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.755407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.755416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.755427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.755436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.755447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.755446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.557 [2024-07-25 07:30:34.755456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.755462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.557 [2024-07-25 07:30:34.755468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.557 [2024-07-25 07:30:34.755468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 [2024-07-25 07:30:34.755473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.557 [2024-07-25 07:30:34.755478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.557 [2024-07-25 07:30:34.755478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.557 [2024-07-25 07:30:34.755484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.557 [2024-07-25 07:30:34.755490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:1[2024-07-25 07:30:34.755492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.557 the state(5) to be set 00:24:27.557 [2024-07-25 07:30:34.755499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.557 [2024-07-25 07:30:34.755501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558 [2024-07-25 07:30:34.755504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:1[2024-07-25 07:30:34.755514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558 the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558 [2024-07-25 07:30:34.755525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:1[2024-07-25 07:30:34.755535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558 the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558 [2024-07-25 07:30:34.755547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:1[2024-07-25 07:30:34.755562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558 [2024-07-25 07:30:34.755568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558 [2024-07-25 07:30:34.755574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558
[2024-07-25 07:30:34.755577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558
[2024-07-25 07:30:34.755588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558
[2024-07-25 07:30:34.755600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558
[2024-07-25 07:30:34.755610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558
[2024-07-25 07:30:34.755615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558
[2024-07-25 07:30:34.755625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558
[2024-07-25 07:30:34.755636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558
[2024-07-25 07:30:34.755646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558
[2024-07-25 07:30:34.755651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558
[2024-07-25 07:30:34.755662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558
[2024-07-25 07:30:34.755673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558
[2024-07-25 07:30:34.755690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558
[2024-07-25 07:30:34.755708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558
[2024-07-25 07:30:34.755717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558
[2024-07-25 07:30:34.755727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558
[2024-07-25 07:30:34.755737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558
[2024-07-25 07:30:34.755742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558
[2024-07-25 07:30:34.755753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558
[2024-07-25 07:30:34.755765] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.558
[2024-07-25 07:30:34.755774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.558
[2024-07-25 07:30:34.755786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.558
[2024-07-25 07:30:34.755792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559
[2024-07-25 07:30:34.755796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d0f30 is same with the state(5) to be set 00:24:27.559
[2024-07-25 07:30:34.755801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:27.559 [2024-07-25 07:30:34.755815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.755826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.755836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.755843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.755852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.755859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.755869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.755875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.755884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.755891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.755901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.755908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.755917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.755924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.755933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.755940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.755950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.755959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.755968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.755976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 
07:30:34.755985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.755992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756168] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.559 [2024-07-25 07:30:34.756326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.559 [2024-07-25 07:30:34.756336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560 [2024-07-25 07:30:34.756577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560 [2024-07-25 07:30:34.756588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560 [2024-07-25 07:30:34.756639] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x261f0f0 was disconnected and freed. reset controller. 
00:24:27.560 [2024-07-25 07:30:34.756642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.560
[2024-07-25 07:30:34.756711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.560
[2024-07-25 07:30:34.756724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.560
[2024-07-25 07:30:34.756734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d13f0 is same with the state(5) to be set 00:24:27.561
[2024-07-25 07:30:34.756818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561
[2024-07-25 07:30:34.756962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561
[2024-07-25 07:30:34.756971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.756978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.756987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.756995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.561 [2024-07-25 07:30:34.757313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:27.561 [2024-07-25 07:30:34.757320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.562 [2024-07-25 07:30:34.757329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.562 [2024-07-25 07:30:34.757339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.562 [2024-07-25 07:30:34.757348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.562 [2024-07-25 07:30:34.757355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.562 [2024-07-25 07:30:34.757365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.562 [2024-07-25 07:30:34.757371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.562 [2024-07-25 07:30:34.757380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.562 [2024-07-25 07:30:34.757388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.562 [2024-07-25 07:30:34.757397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.562 [2024-07-25 07:30:34.757404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.562 [2024-07-25 07:30:34.757413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.562 [2024-07-25 07:30:34.757420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.562 [2024-07-25 07:30:34.757430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.562 [2024-07-25 07:30:34.757436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.562 [2024-07-25 07:30:34.757446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.562 [2024-07-25 07:30:34.757453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.562 [2024-07-25 07:30:34.757508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757533] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757626] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the 
state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.757812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d18d0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.758389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d1db0 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.758800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2270 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.758817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d2270 is same with the state(5) to be set 00:24:27.562 [2024-07-25 07:30:34.771198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:27.563 [2024-07-25 07:30:34.771268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 
07:30:34.771435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.771580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.771589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249d9d0 is same with the state(5) to be set 00:24:27.563 [2024-07-25 07:30:34.771642] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x249d9d0 was disconnected and freed. reset controller. 
00:24:27.563 [2024-07-25 07:30:34.772493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 
[2024-07-25 07:30:34.772687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.563 [2024-07-25 07:30:34.772795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.563 [2024-07-25 07:30:34.772804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.772820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.772836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 
07:30:34.772852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.772868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.772884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.772901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.772918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.772934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.772950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.772966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.772983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.772991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 
07:30:34.773018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.564 [2024-07-25 07:30:34.773358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.564 [2024-07-25 07:30:34.773367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.565 [2024-07-25 07:30:34.773571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.773893] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x259a900 was disconnected and freed. reset controller. 00:24:27.565 [2024-07-25 07:30:34.773922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:27.565 [2024-07-25 07:30:34.773987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c8b10 (9): Bad file descriptor 00:24:27.565 [2024-07-25 07:30:34.774028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2587010 is same with the state(5) to be set 00:24:27.565 [2024-07-25 07:30:34.774123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff63a0 is same with the state(5) to be set 00:24:27.565 [2024-07-25 07:30:34.774215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c74e0 is same with the state(5) to be set 00:24:27.565 [2024-07-25 07:30:34.774299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4650 is same with the state(5) to be set 00:24:27.565 [2024-07-25 07:30:34.774380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b3320 is same with the state(5) to be set 00:24:27.565 [2024-07-25 07:30:34.774462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.565 [2024-07-25 07:30:34.774500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:27.565 [2024-07-25 07:30:34.774508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266b270 is same with the state(5) to be set 00:24:27.566 [2024-07-25 07:30:34.774548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257d550 is same with the state(5) to be set 00:24:27.566 [2024-07-25 07:30:34.774632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774693] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266b450 is same with the state(5) to be set 00:24:27.566 [2024-07-25 07:30:34.774717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.566 [2024-07-25 07:30:34.774772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.774779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2561190 is same with the state(5) to be set 00:24:27.566 [2024-07-25 07:30:34.777434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.566 [2024-07-25 07:30:34.777850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.566 [2024-07-25 07:30:34.777857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.777866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.777874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.777883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.777889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.777898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.777906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.777915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.777922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.777931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.777938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.777947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.777954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.777963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.777970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.777980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.777987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.777996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:27.567 [2024-07-25 07:30:34.778376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.567 [2024-07-25 07:30:34.778502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.567 [2024-07-25 07:30:34.778510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778562] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2f796d0 was disconnected and freed. reset controller. 
00:24:27.568 [2024-07-25 07:30:34.778588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 
07:30:34.778757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778920] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.778987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.778994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.779003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.568 [2024-07-25 07:30:34.779011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.568 [2024-07-25 07:30:34.779020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.779027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.779036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.779043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.779053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.779060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.779069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.779076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.779085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.783984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.569 [2024-07-25 07:30:34.783993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.569 [2024-07-25 07:30:34.784001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.570 [2024-07-25 07:30:34.784017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.570 [2024-07-25 07:30:34.784033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.570 [2024-07-25 07:30:34.784049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.570 [2024-07-25 07:30:34.784065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.570 [2024-07-25 07:30:34.784082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.570 [2024-07-25 07:30:34.784098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.570 [2024-07-25 07:30:34.784116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.570 [2024-07-25 07:30:34.784132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.570 [2024-07-25 07:30:34.784148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.570 [2024-07-25 07:30:34.784164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.570 [2024-07-25 07:30:34.784180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.570 [2024-07-25 07:30:34.784252] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2599420 was disconnected and freed. reset controller. 00:24:27.570 [2024-07-25 07:30:34.785468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:27.570 [2024-07-25 07:30:34.785488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:27.570 [2024-07-25 07:30:34.785507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c74e0 (9): Bad file descriptor 00:24:27.570 [2024-07-25 07:30:34.785521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b3320 (9): Bad file descriptor 00:24:27.570 [2024-07-25 07:30:34.785564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2587010 (9): Bad file descriptor 00:24:27.570 [2024-07-25 07:30:34.785590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff63a0 (9): Bad file descriptor 00:24:27.570 [2024-07-25 07:30:34.785609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a4650 (9): Bad file descriptor 00:24:27.570 [2024-07-25 07:30:34.785628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266b270 (9): Bad file descriptor 00:24:27.570 [2024-07-25 07:30:34.785642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257d550 (9): Bad file descriptor 00:24:27.570 [2024-07-25 07:30:34.785660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266b450 (9): Bad file descriptor 00:24:27.570 [2024-07-25 07:30:34.785673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2561190 (9): Bad file descriptor 00:24:27.570 [2024-07-25 07:30:34.788528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:27.570 [2024-07-25 07:30:34.789008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.570 [2024-07-25 07:30:34.789027] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c8b10 with addr=10.0.0.2, port=4420 00:24:27.570 [2024-07-25 07:30:34.789037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c8b10 is same with the state(5) to be set 00:24:27.570 [2024-07-25 07:30:34.789719] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:27.570 [2024-07-25 07:30:34.790024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:27.570 [2024-07-25 07:30:34.790043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:27.570 [2024-07-25 07:30:34.790637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.570 [2024-07-25 07:30:34.790676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b3320 with addr=10.0.0.2, port=4420 00:24:27.570 [2024-07-25 07:30:34.790687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b3320 is same with the state(5) to be set 00:24:27.570 [2024-07-25 07:30:34.791187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.570 [2024-07-25 07:30:34.791199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c74e0 with addr=10.0.0.2, port=4420 00:24:27.570 [2024-07-25 07:30:34.791213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c74e0 is same with the state(5) to be set 00:24:27.570 [2024-07-25 07:30:34.791787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.570 [2024-07-25 07:30:34.791825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2561190 with addr=10.0.0.2, port=4420 00:24:27.570 [2024-07-25 07:30:34.791836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2561190 is same with the state(5) to be set 00:24:27.570 [2024-07-25 07:30:34.791851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c8b10 (9): Bad file descriptor 00:24:27.570 [2024-07-25 07:30:34.791959] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:27.570 [2024-07-25 07:30:34.792004] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:27.570 [2024-07-25 07:30:34.793073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.570 [2024-07-25 07:30:34.793090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266b450 with addr=10.0.0.2, port=4420 00:24:27.570 [2024-07-25 07:30:34.793098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266b450 is same with the state(5) to be set 00:24:27.570 [2024-07-25 07:30:34.793671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.570 [2024-07-25 07:30:34.793709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266b270 with addr=10.0.0.2, port=4420 00:24:27.570 [2024-07-25 07:30:34.793720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266b270 is same with the state(5) to be set 00:24:27.571 [2024-07-25 07:30:34.793735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b3320 (9): Bad file descriptor 00:24:27.571 [2024-07-25 07:30:34.793745] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c74e0 (9): Bad file descriptor 00:24:27.571 [2024-07-25 07:30:34.793754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2561190 (9): Bad file descriptor 00:24:27.571 [2024-07-25 07:30:34.793762] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:27.571 [2024-07-25 07:30:34.793768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:27.571 [2024-07-25 07:30:34.793777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:27.571 [2024-07-25 07:30:34.793875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.571 [2024-07-25 07:30:34.793887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266b450 (9): Bad file descriptor 00:24:27.571 [2024-07-25 07:30:34.793897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266b270 (9): Bad file descriptor 00:24:27.571 [2024-07-25 07:30:34.793905] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:27.571 [2024-07-25 07:30:34.793911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:27.571 [2024-07-25 07:30:34.793923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:27.571 [2024-07-25 07:30:34.793935] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:27.571 [2024-07-25 07:30:34.793941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:27.571 [2024-07-25 07:30:34.793948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:27.571 [2024-07-25 07:30:34.793958] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:27.571 [2024-07-25 07:30:34.793965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:27.571 [2024-07-25 07:30:34.793971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:27.571 [2024-07-25 07:30:34.794014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.571 [2024-07-25 07:30:34.794021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.571 [2024-07-25 07:30:34.794027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.571 [2024-07-25 07:30:34.794033] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:27.571 [2024-07-25 07:30:34.794040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:27.571 [2024-07-25 07:30:34.794046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:24:27.571 [2024-07-25 07:30:34.794057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:27.571 [2024-07-25 07:30:34.794063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:27.571 [2024-07-25 07:30:34.794070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:27.571 [2024-07-25 07:30:34.794101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.571 [2024-07-25 07:30:34.794109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.571 [2024-07-25 07:30:34.795589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 
07:30:34.795739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.571 [2024-07-25 07:30:34.795828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.571 [2024-07-25 07:30:34.795837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.795844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.795853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.795860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.795870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.795877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.795886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.795893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.795904] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.795911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.795920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.795928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.795937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.795944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.795953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.795960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.795969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.795976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.795986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.795993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.572 [2024-07-25 07:30:34.796270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.572 [2024-07-25 07:30:34.796278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.796661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.796669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20914b0 is same with the state(5) to be set 00:24:27.573 [2024-07-25 07:30:34.797961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.797976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.797989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.573 [2024-07-25 07:30:34.797998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.573 [2024-07-25 07:30:34.798009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.574 [2024-07-25 07:30:34.798468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.574 [2024-07-25 07:30:34.798476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.575 [2024-07-25 07:30:34.798816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.575 [2024-07-25 07:30:34.798823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.798832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.798840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.798849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.798856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.798865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.798872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.798884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.798891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.798900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.798908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.798917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.798924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.798933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.798940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.798949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.798956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.798965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.798972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.798982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.798989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.798999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.799006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.799015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.799022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.799032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.799039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.799047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249eec0 is same with the state(5) to be set 00:24:27.576 [2024-07-25 07:30:34.800333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.576 [2024-07-25 07:30:34.800605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.576 [2024-07-25 07:30:34.800614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.800985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.800995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.801006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.801015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.801022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.801032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.801039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.801048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.801055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.801064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.801071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.801080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.801088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.801097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.801104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.801113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.801121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.801131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.801138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.801147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.801154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.801163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:27.577 [2024-07-25 07:30:34.801171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-25 07:30:34.801180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.577 [2024-07-25 07:30:34.801188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 
07:30:34.801342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.801407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.801415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a03b0 is same with the state(5) to be set 00:24:27.578 [2024-07-25 07:30:34.802692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.802990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.802997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.803006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.803013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.803022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.803029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.803038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.803046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.803055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.803062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.803071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.803078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.803087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.803095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.803104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.578 [2024-07-25 07:30:34.803111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-25 07:30:34.803124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.579 [2024-07-25 07:30:34.803765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.579 [2024-07-25 07:30:34.803772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.580 [2024-07-25 07:30:34.803780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2dd1c90 is same with the state(5) to be set 00:24:27.580 [2024-07-25 07:30:34.806014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:27.580 [2024-07-25 07:30:34.806049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] 
resetting controller
00:24:27.580 [2024-07-25 07:30:34.806063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:24:27.580 task offset: 29184 on job bdev=Nvme2n1 fails
00:24:27.580
00:24:27.580 Latency(us)
00:24:27.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:27.580 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.580 Job: Nvme1n1 ended in about 0.96 seconds with error
00:24:27.580 Verification LBA range: start 0x0 length 0x400
00:24:27.580 Nvme1n1 : 0.96 199.57 12.47 66.52 0.00 237822.51 26760.53 235929.60
00:24:27.580 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.580 Job: Nvme2n1 ended in about 0.92 seconds with error
00:24:27.580 Verification LBA range: start 0x0 length 0x400
00:24:27.580 Nvme2n1 : 0.92 208.81 13.05 69.60 0.00 222338.64 2321.07 248162.99
00:24:27.580 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.580 Job: Nvme3n1 ended in about 0.94 seconds with error
00:24:27.580 Verification LBA range: start 0x0 length 0x400
00:24:27.580 Nvme3n1 : 0.94 136.13 8.51 68.07 0.00 296967.40 21408.43 283115.52
00:24:27.580 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.580 Job: Nvme4n1 ended in about 0.94 seconds with error
00:24:27.580 Verification LBA range: start 0x0 length 0x400
00:24:27.580 Nvme4n1 : 0.94 135.96 8.50 67.98 0.00 290889.96 22500.69 293601.28
00:24:27.580 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.580 Job: Nvme5n1 ended in about 0.96 seconds with error
00:24:27.580 Verification LBA range: start 0x0 length 0x400
00:24:27.580 Nvme5n1 : 0.96 265.44 16.59 66.36 0.00 175087.87 13598.72 216705.71
00:24:27.580 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.580 Job: Nvme6n1 ended in about 0.97 seconds with error
00:24:27.580 Verification LBA range: start 0x0 length 0x400
00:24:27.580 Nvme6n1 : 0.97 132.40 8.27 66.20 0.00 286445.80 23920.64 300591.79
00:24:27.580 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.580 Job: Nvme7n1 ended in about 0.97 seconds with error
00:24:27.580 Verification LBA range: start 0x0 length 0x400
00:24:27.580 Nvme7n1 : 0.97 132.08 8.25 66.04 0.00 280795.59 25340.59 262144.00
00:24:27.580 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.580 Job: Nvme8n1 ended in about 0.95 seconds with error
00:24:27.580 Verification LBA range: start 0x0 length 0x400
00:24:27.580 Nvme8n1 : 0.95 134.52 8.41 67.26 0.00 268443.02 14199.47 335544.32
00:24:27.580 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.580 Job: Nvme9n1 ended in about 0.95 seconds with error
00:24:27.580 Verification LBA range: start 0x0 length 0x400
00:24:27.580 Nvme9n1 : 0.95 134.36 8.40 67.18 0.00 262328.60 15073.28 291853.65
00:24:27.580 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:27.580 Job: Nvme10n1 ended in about 0.95 seconds with error
00:24:27.580 Verification LBA range: start 0x0 length 0x400
00:24:27.580 Nvme10n1 : 0.95 134.79 8.42 67.39 0.00 254946.42 30365.01 297096.53
00:24:27.580 ===================================================================================================================
00:24:27.580 Total : 1614.06 100.88 672.60 0.00 251133.36 2321.07 335544.32
00:24:27.580 [2024-07-25 07:30:34.832677] app.c:1053:spdk_app_stop: *WARNING*: 
spdk_app_stop'd on non-zero 00:24:27.580 [2024-07-25 07:30:34.832721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:27.580 [2024-07-25 07:30:34.833694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.580 [2024-07-25 07:30:34.833718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a4650 with addr=10.0.0.2, port=4420 00:24:27.580 [2024-07-25 07:30:34.833728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a4650 is same with the state(5) to be set 00:24:27.580 [2024-07-25 07:30:34.833944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.580 [2024-07-25 07:30:34.833955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2587010 with addr=10.0.0.2, port=4420 00:24:27.580 [2024-07-25 07:30:34.833962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2587010 is same with the state(5) to be set 00:24:27.580 [2024-07-25 07:30:34.834174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.580 [2024-07-25 07:30:34.834236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff63a0 with addr=10.0.0.2, port=4420 00:24:27.580 [2024-07-25 07:30:34.834243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff63a0 is same with the state(5) to be set 00:24:27.580 [2024-07-25 07:30:34.834702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.580 [2024-07-25 07:30:34.834712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x257d550 with addr=10.0.0.2, port=4420 00:24:27.580 [2024-07-25 07:30:34.834719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257d550 is same with the state(5) to be set 00:24:27.580 [2024-07-25 07:30:34.834736] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:27.580 [2024-07-25 07:30:34.834748] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:27.580 [2024-07-25 07:30:34.834759] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:27.580 [2024-07-25 07:30:34.834770] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:27.580 [2024-07-25 07:30:34.834780] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:27.580 [2024-07-25 07:30:34.834790] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:27.580 [2024-07-25 07:30:34.835869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:27.580 [2024-07-25 07:30:34.835882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:27.580 [2024-07-25 07:30:34.835890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:27.580 [2024-07-25 07:30:34.835898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:27.580 [2024-07-25 07:30:34.835907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:27.580 [2024-07-25 07:30:34.835915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:27.580 [2024-07-25 07:30:34.835976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a4650 (9): Bad file descriptor 00:24:27.580 [2024-07-25 07:30:34.835989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2587010 (9): Bad file descriptor 00:24:27.580 [2024-07-25 07:30:34.836004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff63a0 (9): Bad file descriptor 00:24:27.580 [2024-07-25 07:30:34.836013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257d550 (9): Bad file descriptor 00:24:27.580 [2024-07-25 07:30:34.836509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.580 [2024-07-25 07:30:34.836523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c8b10 with addr=10.0.0.2, port=4420 00:24:27.580 [2024-07-25 07:30:34.836530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c8b10 is same with the state(5) to be set 00:24:27.580 [2024-07-25 07:30:34.836877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.580 [2024-07-25 07:30:34.836889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2561190 with addr=10.0.0.2, port=4420 00:24:27.580 [2024-07-25 07:30:34.836896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2561190 is same with the state(5) to be set 00:24:27.580 [2024-07-25 07:30:34.837355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.580 [2024-07-25 07:30:34.837365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c74e0 with addr=10.0.0.2, port=4420 00:24:27.580 [2024-07-25 07:30:34.837372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c74e0 is same with the state(5) to be set 00:24:27.581 [2024-07-25 07:30:34.837626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.581 [2024-07-25 07:30:34.837635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b3320 with addr=10.0.0.2, port=4420 00:24:27.581 [2024-07-25 07:30:34.837642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b3320 is same with the state(5) to be set 00:24:27.581 [2024-07-25 07:30:34.838076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.581 [2024-07-25 07:30:34.838085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266b270 
with addr=10.0.0.2, port=4420 00:24:27.581 [2024-07-25 07:30:34.838092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266b270 is same with the state(5) to be set 00:24:27.581 [2024-07-25 07:30:34.838515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:27.581 [2024-07-25 07:30:34.838524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x266b450 with addr=10.0.0.2, port=4420 00:24:27.581 [2024-07-25 07:30:34.838531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266b450 is same with the state(5) to be set 00:24:27.581 [2024-07-25 07:30:34.838539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:27.581 [2024-07-25 07:30:34.838545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:27.581 [2024-07-25 07:30:34.838553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:27.581 [2024-07-25 07:30:34.838565] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:27.581 [2024-07-25 07:30:34.838571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:27.581 [2024-07-25 07:30:34.838578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:27.581 [2024-07-25 07:30:34.838588] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:27.581 [2024-07-25 07:30:34.838594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:27.581 [2024-07-25 07:30:34.838600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:27.581 [2024-07-25 07:30:34.838609] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:27.581 [2024-07-25 07:30:34.838619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:27.581 [2024-07-25 07:30:34.838626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:27.581 [2024-07-25 07:30:34.838686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.581 [2024-07-25 07:30:34.838694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.581 [2024-07-25 07:30:34.838700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.581 [2024-07-25 07:30:34.838706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:27.581 [2024-07-25 07:30:34.838714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c8b10 (9): Bad file descriptor 00:24:27.581 [2024-07-25 07:30:34.838723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2561190 (9): Bad file descriptor 00:24:27.581 [2024-07-25 07:30:34.838732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c74e0 (9): Bad file descriptor 00:24:27.581 [2024-07-25 07:30:34.838741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b3320 (9): Bad file descriptor 00:24:27.581 [2024-07-25 07:30:34.838750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266b270 (9): Bad file descriptor 00:24:27.581 [2024-07-25 07:30:34.838759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266b450 (9): Bad file descriptor 00:24:27.581 [2024-07-25 07:30:34.838795] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:27.581 [2024-07-25 07:30:34.838804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:27.581 [2024-07-25 07:30:34.838811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:27.581 [2024-07-25 07:30:34.838821] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:27.581 [2024-07-25 07:30:34.838827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:27.581 [2024-07-25 07:30:34.838833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:27.581 [2024-07-25 07:30:34.838842] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:27.581 [2024-07-25 07:30:34.838848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:27.581 [2024-07-25 07:30:34.838855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:27.581 [2024-07-25 07:30:34.838864] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:27.581 [2024-07-25 07:30:34.838870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:27.581 [2024-07-25 07:30:34.838877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:27.581 [2024-07-25 07:30:34.838886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:27.581 [2024-07-25 07:30:34.838892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:27.581 [2024-07-25 07:30:34.838899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:24:27.581 [2024-07-25 07:30:34.838909] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:27.581 [2024-07-25 07:30:34.838915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:27.581 [2024-07-25 07:30:34.838922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:27.581 [2024-07-25 07:30:34.838952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.581 [2024-07-25 07:30:34.838959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.581 [2024-07-25 07:30:34.838965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.581 [2024-07-25 07:30:34.838971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.581 [2024-07-25 07:30:34.838977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.581 [2024-07-25 07:30:34.838983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:27.842 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:27.842 07:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 173455 00:24:28.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (173455) - No such process 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.785 rmmod nvme_tcp 00:24:28.785 rmmod nvme_fabrics 00:24:28.785 rmmod nvme_keyring 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.785 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.332 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:31.332 00:24:31.332 real 0m7.555s 00:24:31.332 user 0m18.152s 00:24:31.332 sys 0m1.207s 00:24:31.332 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:31.332 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:31.332 ************************************ 00:24:31.332 END TEST nvmf_shutdown_tc3 00:24:31.332 ************************************ 00:24:31.332 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:31.332 00:24:31.332 real 0m31.923s 00:24:31.332 user 1m14.095s 00:24:31.332 sys 0m9.368s 00:24:31.332 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:31.332 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:31.332 ************************************ 00:24:31.332 END TEST nvmf_shutdown 00:24:31.332 ************************************ 00:24:31.332 07:30:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:24:31.332 00:24:31.332 real 11m29.020s 00:24:31.332 user 24m26.475s 00:24:31.332 sys 3m26.531s 00:24:31.332 07:30:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:31.332 07:30:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:31.332 ************************************ 00:24:31.332 END TEST nvmf_target_extra 00:24:31.332 ************************************ 00:24:31.332 07:30:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:31.332 07:30:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:31.332 07:30:38 nvmf_tcp -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:24:31.332 07:30:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:31.332 ************************************ 00:24:31.332 START TEST nvmf_host 00:24:31.332 ************************************ 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:31.332 * Looking for test storage... 00:24:31.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.332 07:30:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.333 ************************************ 00:24:31.333 START TEST nvmf_multicontroller 00:24:31.333 ************************************ 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:31.333 * Looking for test storage... 
00:24:31.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.333 07:30:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:31.333 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.334 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:31.334 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:31.334 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:31.334 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.334 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.334 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.334 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:31.334 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:31.334 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.334 07:30:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.485 07:30:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:39.485 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:39.485 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:39.485 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:39.485 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.485 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:39.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:24:39.485 00:24:39.485 --- 10.0.0.2 ping statistics --- 00:24:39.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.486 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:24:39.486 00:24:39.486 --- 10.0.0.1 ping statistics --- 00:24:39.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.486 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=178243 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 178243 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 178243 ']' 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:39.486 07:30:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 [2024-07-25 07:30:45.844520] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
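The nvmf_tcp_init trace above amounts to a small amount of manual network plumbing before the target starts: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, stale addresses are flushed, and a single iptables rule admits NVMe/TCP traffic. A minimal sketch reconstructed from the commands traced above (namespace, interface names and addresses exactly as logged; not a general-purpose setup script):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP reach the listener
  ping -c 1 10.0.0.2                                                  # reachability check, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two one-packet pings recorded in the log are exactly this reachability check; only after both succeed does the harness launch nvmf_tgt inside the namespace.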
00:24:39.486 [2024-07-25 07:30:45.844582] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.486 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.486 [2024-07-25 07:30:45.931463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:39.486 [2024-07-25 07:30:46.018557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.486 [2024-07-25 07:30:46.018610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.486 [2024-07-25 07:30:46.018617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.486 [2024-07-25 07:30:46.018625] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.486 [2024-07-25 07:30:46.018633] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.486 [2024-07-25 07:30:46.018691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.486 [2024-07-25 07:30:46.018821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.486 [2024-07-25 07:30:46.018822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 [2024-07-25 07:30:46.661739] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 Malloc0 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.486 
07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 [2024-07-25 07:30:46.734345] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 [2024-07-25 07:30:46.746298] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 Malloc1 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.486 07:30:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=178571 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 178571 /var/tmp/bdevperf.sock 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 178571 ']' 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:39.486 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
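Up to this point the multicontroller test has configured the target and started bdevperf, and every configuration step in the trace is a plain SPDK JSON-RPC call. The following sketch replays the same sequence, assuming the rpc_cmd wrapper seen in the trace resolves to scripts/rpc.py against the target's default /var/tmp/spdk.sock (paths shortened here; the Unix socket sits on the shared filesystem, so it is reachable even though nvmf_tgt runs inside the namespace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the harness options (-o, -u 8192)
  rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # cnode2 and Malloc1 are set up the same way, as the trace above shows
  # bdevperf starts paused (-z) on its own RPC socket so controllers can be
  # attached and detached before any I/O is issued
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

With that in place, the repeated bdev_nvme_attach_controller calls that follow are negative tests: re-attaching a controller named NVMe0 over the same or a conflicting network path is expected to fail, which is what the JSON-RPC error responses with code -114 below record.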
00:24:39.487 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:39.487 07:30:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.430 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:40.430 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:40.430 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:40.430 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.430 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.692 NVMe0n1 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.692 1 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.692 request: 00:24:40.692 { 00:24:40.692 "name": "NVMe0", 00:24:40.692 "trtype": "tcp", 00:24:40.692 "traddr": "10.0.0.2", 00:24:40.692 "adrfam": "ipv4", 00:24:40.692 
"trsvcid": "4420", 00:24:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.692 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:40.692 "hostaddr": "10.0.0.2", 00:24:40.692 "hostsvcid": "60000", 00:24:40.692 "prchk_reftag": false, 00:24:40.692 "prchk_guard": false, 00:24:40.692 "hdgst": false, 00:24:40.692 "ddgst": false, 00:24:40.692 "method": "bdev_nvme_attach_controller", 00:24:40.692 "req_id": 1 00:24:40.692 } 00:24:40.692 Got JSON-RPC error response 00:24:40.692 response: 00:24:40.692 { 00:24:40.692 "code": -114, 00:24:40.692 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:40.692 } 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.692 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.692 request: 00:24:40.692 { 00:24:40.692 "name": "NVMe0", 00:24:40.692 "trtype": "tcp", 00:24:40.692 "traddr": "10.0.0.2", 00:24:40.692 "adrfam": "ipv4", 00:24:40.692 "trsvcid": "4420", 00:24:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:40.692 "hostaddr": "10.0.0.2", 00:24:40.692 "hostsvcid": "60000", 00:24:40.692 "prchk_reftag": false, 00:24:40.692 "prchk_guard": false, 00:24:40.693 "hdgst": false, 00:24:40.693 "ddgst": false, 00:24:40.693 "method": "bdev_nvme_attach_controller", 00:24:40.693 "req_id": 1 00:24:40.693 } 00:24:40.693 Got JSON-RPC error response 00:24:40.693 response: 00:24:40.693 { 00:24:40.693 "code": -114, 00:24:40.693 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:24:40.693 } 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.693 request: 00:24:40.693 { 00:24:40.693 "name": "NVMe0", 00:24:40.693 "trtype": "tcp", 00:24:40.693 "traddr": "10.0.0.2", 00:24:40.693 "adrfam": "ipv4", 00:24:40.693 "trsvcid": "4420", 00:24:40.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.693 "hostaddr": "10.0.0.2", 00:24:40.693 "hostsvcid": "60000", 00:24:40.693 "prchk_reftag": false, 00:24:40.693 "prchk_guard": false, 00:24:40.693 "hdgst": false, 00:24:40.693 "ddgst": false, 00:24:40.693 "multipath": "disable", 00:24:40.693 "method": "bdev_nvme_attach_controller", 00:24:40.693 "req_id": 1 00:24:40.693 } 00:24:40.693 Got JSON-RPC error response 00:24:40.693 response: 00:24:40.693 { 00:24:40.693 "code": -114, 00:24:40.693 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:40.693 } 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.693 request: 00:24:40.693 { 00:24:40.693 "name": "NVMe0", 00:24:40.693 "trtype": "tcp", 00:24:40.693 "traddr": "10.0.0.2", 00:24:40.693 "adrfam": "ipv4", 00:24:40.693 "trsvcid": "4420", 00:24:40.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.693 "hostaddr": "10.0.0.2", 00:24:40.693 "hostsvcid": "60000", 00:24:40.693 "prchk_reftag": false, 00:24:40.693 "prchk_guard": false, 00:24:40.693 "hdgst": false, 00:24:40.693 "ddgst": false, 00:24:40.693 "multipath": "failover", 00:24:40.693 "method": "bdev_nvme_attach_controller", 00:24:40.693 "req_id": 1 00:24:40.693 } 00:24:40.693 Got JSON-RPC error response 00:24:40.693 response: 00:24:40.693 { 00:24:40.693 "code": -114, 00:24:40.693 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:40.693 } 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.693 07:30:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.693 00:24:40.693 07:30:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.693 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.693 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.693 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.693 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.693 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:40.693 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.693 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.955 00:24:40.955 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.955 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.955 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:40.955 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.955 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.955 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.955 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:40.955 07:30:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:42.342 0 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 178571 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 178571 ']' 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 178571 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 178571 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:42.342 
07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 178571' 00:24:42.342 killing process with pid 178571 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 178571 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 178571 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:24:42.342 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:42.342 [2024-07-25 07:30:46.861539] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:24:42.342 [2024-07-25 07:30:46.861597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178571 ] 00:24:42.342 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.342 [2024-07-25 07:30:46.920122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.342 [2024-07-25 07:30:46.984758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.342 [2024-07-25 07:30:48.209060] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name eaf3afb1-087d-40c0-b11a-109a959b9a4e already exists 00:24:42.342 [2024-07-25 07:30:48.209091] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:eaf3afb1-087d-40c0-b11a-109a959b9a4e alias for bdev NVMe1n1 00:24:42.342 [2024-07-25 07:30:48.209099] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:42.342 Running I/O for 1 seconds... 
00:24:42.342 00:24:42.342 Latency(us) 00:24:42.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.342 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:42.342 NVMe0n1 : 1.00 29244.71 114.24 0.00 0.00 4366.11 3877.55 13380.27 00:24:42.342 =================================================================================================================== 00:24:42.342 Total : 29244.71 114.24 0.00 0.00 4366.11 3877.55 13380.27 00:24:42.342 Received shutdown signal, test time was about 1.000000 seconds 00:24:42.342 00:24:42.342 Latency(us) 00:24:42.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.342 =================================================================================================================== 00:24:42.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.342 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:42.342 rmmod nvme_tcp 00:24:42.342 rmmod nvme_fabrics 00:24:42.342 rmmod nvme_keyring 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 178243 ']' 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 178243 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 178243 ']' 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 178243 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:42.342 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 178243 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 178243' 00:24:42.604 killing process with pid 178243 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 178243 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 178243 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.604 07:30:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.153 07:30:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:45.153 00:24:45.153 real 0m13.480s 00:24:45.153 user 0m16.665s 00:24:45.153 sys 0m6.033s 00:24:45.153 07:30:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:45.153 07:30:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:45.153 ************************************ 00:24:45.153 END TEST nvmf_multicontroller 00:24:45.153 ************************************ 00:24:45.153 07:30:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:45.153 07:30:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:45.153 07:30:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:45.153 07:30:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.153 ************************************ 00:24:45.153 START TEST nvmf_aer 00:24:45.153 ************************************ 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:45.153 * Looking for test storage... 
00:24:45.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.153 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:45.154 07:30:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:51.749 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:51.749 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:51.749 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.749 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.750 07:30:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:51.750 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:51.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:24:51.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:24:51.750 00:24:51.750 --- 10.0.0.2 ping statistics --- 00:24:51.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.750 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:24:51.750 00:24:51.750 --- 10.0.0.1 ping statistics --- 00:24:51.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.750 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=183216 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 183216 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 183216 ']' 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:51.750 07:30:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:51.750 [2024-07-25 07:30:58.862061] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
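The nvmf_tcp_init block traced above reduces to a short sequence: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then started inside that namespace. A condensed sketch, with interface names, addresses and flags copied from this run (the binary path is shortened to the repo-relative one):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &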
00:24:51.750 [2024-07-25 07:30:58.862129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.750 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.750 [2024-07-25 07:30:58.934789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.750 [2024-07-25 07:30:59.009388] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.750 [2024-07-25 07:30:59.009432] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.750 [2024-07-25 07:30:59.009440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.750 [2024-07-25 07:30:59.009446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.750 [2024-07-25 07:30:59.009452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.750 [2024-07-25 07:30:59.009593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.750 [2024-07-25 07:30:59.009724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.750 [2024-07-25 07:30:59.009883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.750 [2024-07-25 07:30:59.009884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.359 [2024-07-25 07:30:59.696215] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.359 Malloc0 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.359 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.621 07:30:59 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.621 [2024-07-25 07:30:59.755552] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.621 [ 00:24:52.621 { 00:24:52.621 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:52.621 "subtype": "Discovery", 00:24:52.621 "listen_addresses": [], 00:24:52.621 "allow_any_host": true, 00:24:52.621 "hosts": [] 00:24:52.621 }, 00:24:52.621 { 00:24:52.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.621 "subtype": "NVMe", 00:24:52.621 "listen_addresses": [ 00:24:52.621 { 00:24:52.621 "trtype": "TCP", 00:24:52.621 "adrfam": "IPv4", 00:24:52.621 "traddr": "10.0.0.2", 00:24:52.621 "trsvcid": "4420" 00:24:52.621 } 00:24:52.621 ], 00:24:52.621 "allow_any_host": true, 00:24:52.621 "hosts": [], 00:24:52.621 "serial_number": "SPDK00000000000001", 00:24:52.621 "model_number": "SPDK bdev Controller", 00:24:52.621 "max_namespaces": 2, 00:24:52.621 "min_cntlid": 1, 00:24:52.621 "max_cntlid": 65519, 00:24:52.621 "namespaces": [ 00:24:52.621 { 00:24:52.621 "nsid": 1, 00:24:52.621 "bdev_name": "Malloc0", 00:24:52.621 "name": "Malloc0", 00:24:52.621 "nguid": "25315C38663E4D99AF8571AA1C0C90EC", 00:24:52.621 "uuid": "25315c38-663e-4d99-af85-71aa1c0c90ec" 00:24:52.621 } 00:24:52.621 ] 00:24:52.621 } 00:24:52.621 ] 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=183285 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:52.621 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:52.621 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:52.883 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:52.883 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:52.883 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:52.883 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:52.883 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.883 07:30:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.883 Malloc1 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.883 [ 00:24:52.883 { 00:24:52.883 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:52.883 "subtype": "Discovery", 00:24:52.883 "listen_addresses": [], 00:24:52.883 "allow_any_host": true, 00:24:52.883 "hosts": [] 00:24:52.883 }, 00:24:52.883 { 00:24:52.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.883 "subtype": "NVMe", 00:24:52.883 "listen_addresses": [ 00:24:52.883 { 00:24:52.883 "trtype": "TCP", 00:24:52.883 "adrfam": "IPv4", 00:24:52.883 "traddr": "10.0.0.2", 00:24:52.883 "trsvcid": "4420" 00:24:52.883 } 00:24:52.883 ], 00:24:52.883 "allow_any_host": true, 00:24:52.883 "hosts": [], 00:24:52.883 "serial_number": "SPDK00000000000001", 00:24:52.883 Asynchronous Event Request test 00:24:52.883 Attaching to 10.0.0.2 00:24:52.883 Attached to 10.0.0.2 00:24:52.883 Registering asynchronous event callbacks... 00:24:52.883 Starting namespace attribute notice tests for all controllers... 00:24:52.883 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:52.883 aer_cb - Changed Namespace 00:24:52.883 Cleaning up... 
00:24:52.883 "model_number": "SPDK bdev Controller", 00:24:52.883 "max_namespaces": 2, 00:24:52.883 "min_cntlid": 1, 00:24:52.883 "max_cntlid": 65519, 00:24:52.883 "namespaces": [ 00:24:52.883 { 00:24:52.883 "nsid": 1, 00:24:52.883 "bdev_name": "Malloc0", 00:24:52.883 "name": "Malloc0", 00:24:52.883 "nguid": "25315C38663E4D99AF8571AA1C0C90EC", 00:24:52.883 "uuid": "25315c38-663e-4d99-af85-71aa1c0c90ec" 00:24:52.883 }, 00:24:52.883 { 00:24:52.883 "nsid": 2, 00:24:52.883 "bdev_name": "Malloc1", 00:24:52.883 "name": "Malloc1", 00:24:52.883 "nguid": "56824A55034D4633BDAB4A64A47EB2D9", 00:24:52.883 "uuid": "56824a55-034d-4633-bdab-4a64a47eb2d9" 00:24:52.883 } 00:24:52.883 ] 00:24:52.883 } 00:24:52.883 ] 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 183285 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.883 rmmod nvme_tcp 00:24:52.883 rmmod nvme_fabrics 00:24:52.883 rmmod nvme_keyring 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 183216 ']' 00:24:52.883 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 183216 00:24:52.884 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 183216 ']' 00:24:52.884 07:31:00 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 183216 00:24:52.884 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:24:52.884 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:52.884 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 183216 00:24:52.884 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:52.884 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:52.884 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 183216' 00:24:52.884 killing process with pid 183216 00:24:52.884 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 183216 00:24:52.884 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 183216 00:24:53.146 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:53.146 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:53.146 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:53.146 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:53.146 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:53.146 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.146 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.146 07:31:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:55.693 00:24:55.693 real 0m10.436s 00:24:55.693 user 0m7.228s 00:24:55.693 sys 0m5.519s 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:55.693 ************************************ 00:24:55.693 END TEST nvmf_aer 00:24:55.693 ************************************ 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.693 ************************************ 00:24:55.693 START TEST nvmf_async_init 00:24:55.693 ************************************ 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:55.693 * Looking for test storage... 
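Stripped of the xtrace noise, the nvmf_aer run that just finished drove the target through roughly the RPC sequence below (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, so plain rpc.py calls are shown here as an approximation). The subsystem is created with -m 2, and hot-adding the second namespace is what fires the namespace-attribute AER that the aer test binary waits for:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 --name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# the aer tool registers for async events, then waits for a namespace notice
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
# adding namespace 2 triggers the event ("aer_cb - Changed Namespace" above)
rpc.py bdev_malloc_create 64 4096 --name Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2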
00:24:55.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.693 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:55.694 07:31:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2f1c931873f9435b8b8fa070abeff4f0 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:55.694 07:31:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:02.287 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:02.287 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:02.288 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
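This device-discovery block is the same nvmf/common.sh logic that already ran for the aer test: PCI functions are bucketed by vendor/device ID, 0x8086/0x159b is treated as an E810 port (driver ice), and the usable kernel interface is whatever appears under the device's net/ directory in sysfs. For checking a machine by hand, a rough standalone equivalent might be the sketch below; the lspci filter is an assumption of this sketch, not something the harness itself runs:

for pci in $(lspci -Dnn | awk '/\[8086:159b\]/ {print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "E810 port $pci -> ${dev##*/}"
    done
done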
00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:02.288 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:02.288 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.288 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.549 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.549 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.549 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:02.549 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.549 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.550 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:02.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:25:02.811 00:25:02.811 --- 10.0.0.2 ping statistics --- 00:25:02.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.811 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:02.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.510 ms 00:25:02.811 00:25:02.811 --- 10.0.0.1 ping statistics --- 00:25:02.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.811 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=187603 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 187603 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 187603 ']' 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:02.811 07:31:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:02.811 [2024-07-25 07:31:10.043282] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
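With the namespace test-bed rebuilt, async_init.sh starts a single-core target (-m 0x1) in the namespace and then exercises attach, reset and TLS re-attach against it. The setup performed in the trace lines that follow amounts to this sketch (NQN, NGUID, sizes and addresses copied from the run):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
rpc.py nvmf_create_transport -t tcp -o
rpc.py bdev_null_create null0 1024 512      # 1024 MiB, 512 B blocks -> 2097152 blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2f1c931873f9435b8b8fa070abeff4f0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# loop the target back on itself as an NVMe-oF TCP initiator
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0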
00:25:02.811 [2024-07-25 07:31:10.043354] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.811 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.812 [2024-07-25 07:31:10.116470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.072 [2024-07-25 07:31:10.191724] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.072 [2024-07-25 07:31:10.191764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.072 [2024-07-25 07:31:10.191772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.072 [2024-07-25 07:31:10.191778] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.072 [2024-07-25 07:31:10.191784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.072 [2024-07-25 07:31:10.191803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:03.644 [2024-07-25 07:31:10.866935] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:03.644 null0 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:03.644 07:31:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2f1c931873f9435b8b8fa070abeff4f0 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:03.644 [2024-07-25 07:31:10.923195] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.644 07:31:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:03.905 nvme0n1 00:25:03.905 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.905 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:03.905 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.905 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:03.905 [ 00:25:03.905 { 00:25:03.905 "name": "nvme0n1", 00:25:03.905 "aliases": [ 00:25:03.905 "2f1c9318-73f9-435b-8b8f-a070abeff4f0" 00:25:03.905 ], 00:25:03.905 "product_name": "NVMe disk", 00:25:03.905 "block_size": 512, 00:25:03.905 "num_blocks": 2097152, 00:25:03.905 "uuid": "2f1c9318-73f9-435b-8b8f-a070abeff4f0", 00:25:03.905 "assigned_rate_limits": { 00:25:03.905 "rw_ios_per_sec": 0, 00:25:03.905 "rw_mbytes_per_sec": 0, 00:25:03.905 "r_mbytes_per_sec": 0, 00:25:03.905 "w_mbytes_per_sec": 0 00:25:03.905 }, 00:25:03.905 "claimed": false, 00:25:03.905 "zoned": false, 00:25:03.905 "supported_io_types": { 00:25:03.905 "read": true, 00:25:03.905 "write": true, 00:25:03.905 "unmap": false, 00:25:03.905 "flush": true, 00:25:03.905 "reset": true, 00:25:03.905 "nvme_admin": true, 00:25:03.905 "nvme_io": true, 00:25:03.905 "nvme_io_md": false, 00:25:03.905 "write_zeroes": true, 00:25:03.905 "zcopy": false, 00:25:03.905 "get_zone_info": false, 00:25:03.905 "zone_management": false, 00:25:03.905 "zone_append": false, 00:25:03.905 "compare": true, 00:25:03.905 "compare_and_write": true, 00:25:03.905 "abort": true, 00:25:03.905 "seek_hole": false, 00:25:03.905 "seek_data": false, 00:25:03.905 "copy": true, 00:25:03.905 "nvme_iov_md": 
false 00:25:03.905 }, 00:25:03.905 "memory_domains": [ 00:25:03.905 { 00:25:03.905 "dma_device_id": "system", 00:25:03.905 "dma_device_type": 1 00:25:03.905 } 00:25:03.905 ], 00:25:03.905 "driver_specific": { 00:25:03.905 "nvme": [ 00:25:03.905 { 00:25:03.905 "trid": { 00:25:03.905 "trtype": "TCP", 00:25:03.905 "adrfam": "IPv4", 00:25:03.905 "traddr": "10.0.0.2", 00:25:03.905 "trsvcid": "4420", 00:25:03.905 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:03.905 }, 00:25:03.905 "ctrlr_data": { 00:25:03.905 "cntlid": 1, 00:25:03.905 "vendor_id": "0x8086", 00:25:03.905 "model_number": "SPDK bdev Controller", 00:25:03.905 "serial_number": "00000000000000000000", 00:25:03.906 "firmware_revision": "24.09", 00:25:03.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:03.906 "oacs": { 00:25:03.906 "security": 0, 00:25:03.906 "format": 0, 00:25:03.906 "firmware": 0, 00:25:03.906 "ns_manage": 0 00:25:03.906 }, 00:25:03.906 "multi_ctrlr": true, 00:25:03.906 "ana_reporting": false 00:25:03.906 }, 00:25:03.906 "vs": { 00:25:03.906 "nvme_version": "1.3" 00:25:03.906 }, 00:25:03.906 "ns_data": { 00:25:03.906 "id": 1, 00:25:03.906 "can_share": true 00:25:03.906 } 00:25:03.906 } 00:25:03.906 ], 00:25:03.906 "mp_policy": "active_passive" 00:25:03.906 } 00:25:03.906 } 00:25:03.906 ] 00:25:03.906 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.906 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:03.906 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.906 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:03.906 [2024-07-25 07:31:11.191952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:03.906 [2024-07-25 07:31:11.192017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2032fd0 (9): Bad file descriptor 00:25:04.167 [2024-07-25 07:31:11.324306] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
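The reset step above tears down and re-establishes the fabric association; the clearest sign that a genuinely new controller came up is the controller ID, which moves from cntlid 1 to cntlid 2 in the bdev_get_bdevs dump that follows. A hand-run check could look like this (the jq path mirrors the JSON shape printed in this trace):

rpc.py bdev_nvme_reset_controller nvme0
rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'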
00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.167 [ 00:25:04.167 { 00:25:04.167 "name": "nvme0n1", 00:25:04.167 "aliases": [ 00:25:04.167 "2f1c9318-73f9-435b-8b8f-a070abeff4f0" 00:25:04.167 ], 00:25:04.167 "product_name": "NVMe disk", 00:25:04.167 "block_size": 512, 00:25:04.167 "num_blocks": 2097152, 00:25:04.167 "uuid": "2f1c9318-73f9-435b-8b8f-a070abeff4f0", 00:25:04.167 "assigned_rate_limits": { 00:25:04.167 "rw_ios_per_sec": 0, 00:25:04.167 "rw_mbytes_per_sec": 0, 00:25:04.167 "r_mbytes_per_sec": 0, 00:25:04.167 "w_mbytes_per_sec": 0 00:25:04.167 }, 00:25:04.167 "claimed": false, 00:25:04.167 "zoned": false, 00:25:04.167 "supported_io_types": { 00:25:04.167 "read": true, 00:25:04.167 "write": true, 00:25:04.167 "unmap": false, 00:25:04.167 "flush": true, 00:25:04.167 "reset": true, 00:25:04.167 "nvme_admin": true, 00:25:04.167 "nvme_io": true, 00:25:04.167 "nvme_io_md": false, 00:25:04.167 "write_zeroes": true, 00:25:04.167 "zcopy": false, 00:25:04.167 "get_zone_info": false, 00:25:04.167 "zone_management": false, 00:25:04.167 "zone_append": false, 00:25:04.167 "compare": true, 00:25:04.167 "compare_and_write": true, 00:25:04.167 "abort": true, 00:25:04.167 "seek_hole": false, 00:25:04.167 "seek_data": false, 00:25:04.167 "copy": true, 00:25:04.167 "nvme_iov_md": false 00:25:04.167 }, 00:25:04.167 "memory_domains": [ 00:25:04.167 { 00:25:04.167 "dma_device_id": "system", 00:25:04.167 "dma_device_type": 1 00:25:04.167 } 00:25:04.167 ], 00:25:04.167 "driver_specific": { 00:25:04.167 "nvme": [ 00:25:04.167 { 00:25:04.167 "trid": { 00:25:04.167 "trtype": "TCP", 00:25:04.167 "adrfam": "IPv4", 00:25:04.167 "traddr": "10.0.0.2", 00:25:04.167 "trsvcid": "4420", 00:25:04.167 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:04.167 }, 00:25:04.167 "ctrlr_data": { 00:25:04.167 "cntlid": 2, 00:25:04.167 "vendor_id": "0x8086", 00:25:04.167 "model_number": "SPDK bdev Controller", 00:25:04.167 "serial_number": "00000000000000000000", 00:25:04.167 "firmware_revision": "24.09", 00:25:04.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:04.167 "oacs": { 00:25:04.167 "security": 0, 00:25:04.167 "format": 0, 00:25:04.167 "firmware": 0, 00:25:04.167 "ns_manage": 0 00:25:04.167 }, 00:25:04.167 "multi_ctrlr": true, 00:25:04.167 "ana_reporting": false 00:25:04.167 }, 00:25:04.167 "vs": { 00:25:04.167 "nvme_version": "1.3" 00:25:04.167 }, 00:25:04.167 "ns_data": { 00:25:04.167 "id": 1, 00:25:04.167 "can_share": true 00:25:04.167 } 00:25:04.167 } 00:25:04.167 ], 00:25:04.167 "mp_policy": "active_passive" 00:25:04.167 } 00:25:04.167 } 00:25:04.167 ] 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.167 07:31:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Z98zgiQrd2 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Z98zgiQrd2 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.167 [2024-07-25 07:31:11.404593] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:04.167 [2024-07-25 07:31:11.404719] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z98zgiQrd2 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.167 [2024-07-25 07:31:11.416618] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z98zgiQrd2 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.167 [2024-07-25 07:31:11.428668] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:04.167 [2024-07-25 07:31:11.428703] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:04.167 nvme0n1 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
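The final scenario detaches nvme0 and re-attaches it over a PSK-secured listener on port 4421; both the target-side PSK path and the initiator-side psk option are logged as experimental/deprecated, which is why the shutdown later reports two deprecation hits. Condensed, with the key material and flags copied from this run (the temp-file name is whatever mktemp returned, /tmp/tmp.Z98zgiQrd2 here; the key is test-only material):

key=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
chmod 0600 "$key"
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"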
00:25:04.167 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.167 [ 00:25:04.167 { 00:25:04.167 "name": "nvme0n1", 00:25:04.167 "aliases": [ 00:25:04.167 "2f1c9318-73f9-435b-8b8f-a070abeff4f0" 00:25:04.167 ], 00:25:04.167 "product_name": "NVMe disk", 00:25:04.167 "block_size": 512, 00:25:04.167 "num_blocks": 2097152, 00:25:04.167 "uuid": "2f1c9318-73f9-435b-8b8f-a070abeff4f0", 00:25:04.167 "assigned_rate_limits": { 00:25:04.167 "rw_ios_per_sec": 0, 00:25:04.167 "rw_mbytes_per_sec": 0, 00:25:04.167 "r_mbytes_per_sec": 0, 00:25:04.167 "w_mbytes_per_sec": 0 00:25:04.167 }, 00:25:04.167 "claimed": false, 00:25:04.167 "zoned": false, 00:25:04.167 "supported_io_types": { 00:25:04.167 "read": true, 00:25:04.167 "write": true, 00:25:04.167 "unmap": false, 00:25:04.167 "flush": true, 00:25:04.167 "reset": true, 00:25:04.167 "nvme_admin": true, 00:25:04.167 "nvme_io": true, 00:25:04.167 "nvme_io_md": false, 00:25:04.167 "write_zeroes": true, 00:25:04.167 "zcopy": false, 00:25:04.167 "get_zone_info": false, 00:25:04.167 "zone_management": false, 00:25:04.167 "zone_append": false, 00:25:04.167 "compare": true, 00:25:04.167 "compare_and_write": true, 00:25:04.167 "abort": true, 00:25:04.167 "seek_hole": false, 00:25:04.167 "seek_data": false, 00:25:04.167 "copy": true, 00:25:04.167 "nvme_iov_md": false 00:25:04.167 }, 00:25:04.167 "memory_domains": [ 00:25:04.167 { 00:25:04.167 "dma_device_id": "system", 00:25:04.167 "dma_device_type": 1 00:25:04.168 } 00:25:04.168 ], 00:25:04.168 "driver_specific": { 00:25:04.168 "nvme": [ 00:25:04.168 { 00:25:04.168 "trid": { 00:25:04.168 "trtype": "TCP", 00:25:04.168 "adrfam": "IPv4", 00:25:04.168 "traddr": "10.0.0.2", 00:25:04.168 "trsvcid": "4421", 00:25:04.168 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:04.168 }, 00:25:04.168 "ctrlr_data": { 00:25:04.168 "cntlid": 3, 00:25:04.168 "vendor_id": "0x8086", 00:25:04.168 "model_number": "SPDK bdev Controller", 00:25:04.168 "serial_number": "00000000000000000000", 00:25:04.168 "firmware_revision": "24.09", 00:25:04.168 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:04.168 "oacs": { 00:25:04.168 "security": 0, 00:25:04.168 "format": 0, 00:25:04.168 "firmware": 0, 00:25:04.168 "ns_manage": 0 00:25:04.168 }, 00:25:04.168 "multi_ctrlr": true, 00:25:04.168 "ana_reporting": false 00:25:04.168 }, 00:25:04.168 "vs": { 00:25:04.168 "nvme_version": "1.3" 00:25:04.168 }, 00:25:04.168 "ns_data": { 00:25:04.168 "id": 1, 00:25:04.168 "can_share": true 00:25:04.168 } 00:25:04.168 } 00:25:04.168 ], 00:25:04.168 "mp_policy": "active_passive" 00:25:04.168 } 00:25:04.168 } 00:25:04.168 ] 00:25:04.168 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.168 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.168 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.168 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Z98zgiQrd2 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:25:04.429 07:31:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:04.429 rmmod nvme_tcp 00:25:04.429 rmmod nvme_fabrics 00:25:04.429 rmmod nvme_keyring 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 187603 ']' 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 187603 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 187603 ']' 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 187603 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 187603 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 187603' 00:25:04.429 killing process with pid 187603 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 187603 00:25:04.429 [2024-07-25 07:31:11.666621] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:04.429 [2024-07-25 07:31:11.666647] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 187603 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.429 07:31:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.429 07:31:11 
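For reference, the TLS exercise that nvmf_async_init just completed boils down to a short RPC sequence. The sketch below replays it with scripts/rpc.py against the already-running app (which acts as both NVMe-oF target and bdev_nvme initiator here); the key material, NQNs, address, and port are copied verbatim from the log above, while the rpc_cmd wrapper, the nvmftestinit/nvmftestfini plumbing, and the default RPC socket are assumed.

#!/usr/bin/env bash
# Minimal sketch of the secure-channel attach shown above (values copied from the log).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# async_init.sh@53-55: write the interchange-format PSK to a 0600 key file.
key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"

# async_init.sh@56-59: restrict the subsystem to named hosts, add a TLS listener on 4421,
# and allow host1 with the PSK (the log warns this PSK-path form is deprecated).
$RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"

# async_init.sh@65: attach the host-side controller over the secure channel.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

# async_init.sh@69-75: verify the namespace bdev, then tear down.
$RPC bdev_get_bdevs -b nvme0n1
$RPC bdev_nvme_detach_controller nvme0
rm -f "$key_path"

The bdev dump above confirms the outcome: the re-attached controller comes up with cntlid 3 on port 4421 against the same namespace UUID that the earlier cntlid 2 connection reported on port 4420.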
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.976 07:31:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:06.976 00:25:06.976 real 0m11.327s 00:25:06.976 user 0m3.990s 00:25:06.976 sys 0m5.828s 00:25:06.976 07:31:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:06.976 07:31:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.976 ************************************ 00:25:06.976 END TEST nvmf_async_init 00:25:06.976 ************************************ 00:25:06.976 07:31:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:06.976 07:31:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:06.976 07:31:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:06.976 07:31:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.976 ************************************ 00:25:06.976 START TEST dma 00:25:06.976 ************************************ 00:25:06.976 07:31:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:06.976 * Looking for test storage... 00:25:06.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.976 
07:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.976 07:31:14 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:06.976 00:25:06.976 real 0m0.133s 00:25:06.976 user 0m0.050s 00:25:06.976 sys 0m0.092s 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:06.976 ************************************ 00:25:06.976 END TEST dma 00:25:06.976 ************************************ 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.976 ************************************ 00:25:06.976 START TEST nvmf_identify 00:25:06.976 ************************************ 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:06.976 * Looking for test storage... 00:25:06.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.976 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:25:06.977 07:31:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:15.124 07:31:21 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:15.124 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.124 07:31:21 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:15.124 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:15.124 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:15.125 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:15.125 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:15.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:25:15.125 00:25:15.125 --- 10.0.0.2 ping statistics --- 00:25:15.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.125 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.410 ms 00:25:15.125 00:25:15.125 --- 10.0.0.1 ping statistics --- 00:25:15.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.125 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=192154 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 192154 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 192154 ']' 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:15.125 07:31:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.125 [2024-07-25 07:31:21.720520] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
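Before the target starts, nvmftestinit wires the two ice/e810 ports into a point-to-point topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A condensed sketch of that plumbing, using the ip/iptables/ping commands and values reported in the log (the initial addr-flush steps are omitted):

# nvmf/common.sh@248-261: split the two ports across a network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# nvmf/common.sh@264: accept inbound TCP traffic for port 4420 arriving on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# nvmf/common.sh@267-268: sanity-check both directions before launching the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself then runs inside the namespace (identify.sh@18: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why common.sh@270 prepends NVMF_TARGET_NS_CMD to NVMF_APP.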
00:25:15.125 [2024-07-25 07:31:21.720586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.125 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.125 [2024-07-25 07:31:21.793376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:15.125 [2024-07-25 07:31:21.870095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.125 [2024-07-25 07:31:21.870136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.125 [2024-07-25 07:31:21.870143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.125 [2024-07-25 07:31:21.870150] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.125 [2024-07-25 07:31:21.870155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.125 [2024-07-25 07:31:21.870246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.125 [2024-07-25 07:31:21.870462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.125 [2024-07-25 07:31:21.870463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:15.125 [2024-07-25 07:31:21.870305] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.387 [2024-07-25 07:31:22.518130] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.387 Malloc0 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.387 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.388 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.388 [2024-07-25 07:31:22.617581] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.388 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.388 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:15.388 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.388 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.388 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.388 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:15.388 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.388 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.388 [ 00:25:15.388 { 00:25:15.388 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:15.388 "subtype": "Discovery", 00:25:15.388 "listen_addresses": [ 00:25:15.388 { 00:25:15.388 "trtype": "TCP", 00:25:15.388 "adrfam": "IPv4", 00:25:15.388 "traddr": "10.0.0.2", 00:25:15.388 "trsvcid": "4420" 00:25:15.388 } 00:25:15.388 ], 00:25:15.388 "allow_any_host": true, 00:25:15.388 "hosts": [] 00:25:15.388 }, 00:25:15.388 { 00:25:15.388 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.388 "subtype": "NVMe", 00:25:15.388 "listen_addresses": [ 00:25:15.388 { 00:25:15.388 "trtype": "TCP", 00:25:15.388 "adrfam": "IPv4", 00:25:15.388 "traddr": "10.0.0.2", 00:25:15.388 "trsvcid": "4420" 00:25:15.388 } 00:25:15.388 ], 00:25:15.388 "allow_any_host": true, 00:25:15.388 "hosts": [], 00:25:15.388 "serial_number": "SPDK00000000000001", 00:25:15.388 "model_number": "SPDK bdev Controller", 00:25:15.388 "max_namespaces": 32, 00:25:15.388 "min_cntlid": 1, 00:25:15.388 "max_cntlid": 65519, 00:25:15.388 "namespaces": [ 00:25:15.388 { 00:25:15.388 "nsid": 1, 00:25:15.388 "bdev_name": "Malloc0", 00:25:15.388 "name": "Malloc0", 00:25:15.388 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:15.388 "eui64": "ABCDEF0123456789", 00:25:15.388 "uuid": "785d71fb-f812-4a79-8388-a060d5328508" 00:25:15.388 } 00:25:15.388 ] 00:25:15.388 } 00:25:15.388 ] 00:25:15.388 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.388 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:15.388 [2024-07-25 07:31:22.679670] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:25:15.388 [2024-07-25 07:31:22.679723] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192352 ] 00:25:15.388 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.388 [2024-07-25 07:31:22.712857] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:15.388 [2024-07-25 07:31:22.712905] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:15.388 [2024-07-25 07:31:22.712910] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:15.388 [2024-07-25 07:31:22.712922] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:15.388 [2024-07-25 07:31:22.712931] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:15.388 [2024-07-25 07:31:22.716230] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:15.388 [2024-07-25 07:31:22.716257] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2125ec0 0 00:25:15.388 [2024-07-25 07:31:22.724208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:15.388 [2024-07-25 07:31:22.724225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:15.388 [2024-07-25 07:31:22.724230] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:15.388 [2024-07-25 07:31:22.724234] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:15.388 [2024-07-25 07:31:22.724270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.724276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.724280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2125ec0) 00:25:15.388 [2024-07-25 07:31:22.724293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:15.388 [2024-07-25 07:31:22.724311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a8e40, cid 0, qid 0 00:25:15.388 [2024-07-25 07:31:22.732210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.388 [2024-07-25 07:31:22.732219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.388 [2024-07-25 07:31:22.732223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.732227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a8e40) on tqpair=0x2125ec0 00:25:15.388 [2024-07-25 07:31:22.732236] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:15.388 [2024-07-25 07:31:22.732243] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:15.388 [2024-07-25 07:31:22.732248] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:25:15.388 [2024-07-25 07:31:22.732261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.732265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.732269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2125ec0) 00:25:15.388 [2024-07-25 07:31:22.732276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.388 [2024-07-25 07:31:22.732289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a8e40, cid 0, qid 0 00:25:15.388 [2024-07-25 07:31:22.732531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.388 [2024-07-25 07:31:22.732539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.388 [2024-07-25 07:31:22.732543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.732547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a8e40) on tqpair=0x2125ec0 00:25:15.388 [2024-07-25 07:31:22.732558] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:15.388 [2024-07-25 07:31:22.732566] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:15.388 [2024-07-25 07:31:22.732573] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.732577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.732581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2125ec0) 00:25:15.388 [2024-07-25 07:31:22.732588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.388 [2024-07-25 07:31:22.732599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a8e40, cid 0, qid 0 00:25:15.388 [2024-07-25 07:31:22.732725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.388 [2024-07-25 07:31:22.732731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.388 [2024-07-25 07:31:22.732735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.732738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a8e40) on tqpair=0x2125ec0 00:25:15.388 [2024-07-25 07:31:22.732743] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:15.388 [2024-07-25 07:31:22.732751] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:15.388 [2024-07-25 07:31:22.732757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.732761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.732765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2125ec0) 00:25:15.388 [2024-07-25 07:31:22.732771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.388 [2024-07-25 07:31:22.732782] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a8e40, cid 0, qid 0 00:25:15.388 [2024-07-25 07:31:22.732982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.388 [2024-07-25 07:31:22.732988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.388 [2024-07-25 07:31:22.732992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.732995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a8e40) on tqpair=0x2125ec0 00:25:15.388 [2024-07-25 07:31:22.733000] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:15.388 [2024-07-25 07:31:22.733009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.733013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.733016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2125ec0) 00:25:15.388 [2024-07-25 07:31:22.733023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.388 [2024-07-25 07:31:22.733033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a8e40, cid 0, qid 0 00:25:15.388 [2024-07-25 07:31:22.733222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.388 [2024-07-25 07:31:22.733229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.388 [2024-07-25 07:31:22.733232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.388 [2024-07-25 07:31:22.733236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a8e40) on tqpair=0x2125ec0 00:25:15.388 [2024-07-25 07:31:22.733241] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:15.388 [2024-07-25 07:31:22.733246] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:15.388 [2024-07-25 07:31:22.733256] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:15.388 [2024-07-25 07:31:22.733361] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:15.389 [2024-07-25 07:31:22.733366] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:15.389 [2024-07-25 07:31:22.733374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.389 [2024-07-25 07:31:22.733377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.389 [2024-07-25 07:31:22.733381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2125ec0) 00:25:15.389 [2024-07-25 07:31:22.733388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.389 [2024-07-25 07:31:22.733399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a8e40, cid 0, qid 0 00:25:15.389 [2024-07-25 07:31:22.733694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:25:15.389 [2024-07-25 07:31:22.733701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.389 [2024-07-25 07:31:22.733704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.389 [2024-07-25 07:31:22.733708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a8e40) on tqpair=0x2125ec0 00:25:15.389 [2024-07-25 07:31:22.733712] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:15.389 [2024-07-25 07:31:22.733722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.389 [2024-07-25 07:31:22.733725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.389 [2024-07-25 07:31:22.733729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2125ec0) 00:25:15.389 [2024-07-25 07:31:22.733735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.389 [2024-07-25 07:31:22.733745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a8e40, cid 0, qid 0 00:25:15.389 [2024-07-25 07:31:22.733887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.389 [2024-07-25 07:31:22.733893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.389 [2024-07-25 07:31:22.733896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.389 [2024-07-25 07:31:22.733900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a8e40) on tqpair=0x2125ec0 00:25:15.389 [2024-07-25 07:31:22.733904] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:15.389 [2024-07-25 07:31:22.733909] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:15.389 [2024-07-25 07:31:22.733916] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:15.389 [2024-07-25 07:31:22.733924] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:15.389 [2024-07-25 07:31:22.733933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.389 [2024-07-25 07:31:22.733937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2125ec0) 00:25:15.389 [2024-07-25 07:31:22.733944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.389 [2024-07-25 07:31:22.733954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a8e40, cid 0, qid 0 00:25:15.389 [2024-07-25 07:31:22.734147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.389 [2024-07-25 07:31:22.734156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.389 [2024-07-25 07:31:22.734159] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.389 [2024-07-25 07:31:22.734163] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2125ec0): datao=0, datal=4096, cccid=0 00:25:15.389 [2024-07-25 07:31:22.734168] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a8e40) on tqpair(0x2125ec0): expected_datao=0, payload_size=4096 00:25:15.389 [2024-07-25 07:31:22.734172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.389 [2024-07-25 07:31:22.734240] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.389 [2024-07-25 07:31:22.734246] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.655 [2024-07-25 07:31:22.775419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.655 [2024-07-25 07:31:22.775423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775427] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a8e40) on tqpair=0x2125ec0 00:25:15.655 [2024-07-25 07:31:22.775437] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:15.655 [2024-07-25 07:31:22.775441] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:15.655 [2024-07-25 07:31:22.775446] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:15.655 [2024-07-25 07:31:22.775451] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:15.655 [2024-07-25 07:31:22.775455] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:15.655 [2024-07-25 07:31:22.775460] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:15.655 [2024-07-25 07:31:22.775468] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:15.655 [2024-07-25 07:31:22.775480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2125ec0) 00:25:15.655 [2024-07-25 07:31:22.775496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:15.655 [2024-07-25 07:31:22.775509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a8e40, cid 0, qid 0 00:25:15.655 [2024-07-25 07:31:22.775645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.655 [2024-07-25 07:31:22.775651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.655 [2024-07-25 07:31:22.775655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a8e40) on tqpair=0x2125ec0 00:25:15.655 [2024-07-25 07:31:22.775666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x2125ec0) 00:25:15.655 [2024-07-25 07:31:22.775679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.655 [2024-07-25 07:31:22.775686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775689] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2125ec0) 00:25:15.655 [2024-07-25 07:31:22.775702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.655 [2024-07-25 07:31:22.775708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2125ec0) 00:25:15.655 [2024-07-25 07:31:22.775721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.655 [2024-07-25 07:31:22.775727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2125ec0) 00:25:15.655 [2024-07-25 07:31:22.775739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.655 [2024-07-25 07:31:22.775744] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:15.655 [2024-07-25 07:31:22.775754] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:15.655 [2024-07-25 07:31:22.775761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2125ec0) 00:25:15.655 [2024-07-25 07:31:22.775771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.655 [2024-07-25 07:31:22.775783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a8e40, cid 0, qid 0 00:25:15.655 [2024-07-25 07:31:22.775789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a8fc0, cid 1, qid 0 00:25:15.655 [2024-07-25 07:31:22.775793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a9140, cid 2, qid 0 00:25:15.655 [2024-07-25 07:31:22.775798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a92c0, cid 3, qid 0 00:25:15.655 [2024-07-25 07:31:22.775803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a9440, cid 4, qid 0 00:25:15.655 [2024-07-25 07:31:22.775982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.655 [2024-07-25 07:31:22.775988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.655 [2024-07-25 07:31:22.775992] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.775995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a9440) on tqpair=0x2125ec0 00:25:15.655 [2024-07-25 07:31:22.776001] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:15.655 [2024-07-25 07:31:22.776005] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:15.655 [2024-07-25 07:31:22.776016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.776020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2125ec0) 00:25:15.655 [2024-07-25 07:31:22.776027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.655 [2024-07-25 07:31:22.776037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a9440, cid 4, qid 0 00:25:15.655 [2024-07-25 07:31:22.780210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.655 [2024-07-25 07:31:22.780218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.655 [2024-07-25 07:31:22.780221] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.780225] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2125ec0): datao=0, datal=4096, cccid=4 00:25:15.655 [2024-07-25 07:31:22.780233] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a9440) on tqpair(0x2125ec0): expected_datao=0, payload_size=4096 00:25:15.655 [2024-07-25 07:31:22.780237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.780244] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.780248] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.780254] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.655 [2024-07-25 07:31:22.780260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.655 [2024-07-25 07:31:22.780263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.780267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a9440) on tqpair=0x2125ec0 00:25:15.655 [2024-07-25 07:31:22.780278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:15.655 [2024-07-25 07:31:22.780301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.780305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2125ec0) 00:25:15.655 [2024-07-25 07:31:22.780311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.655 [2024-07-25 07:31:22.780318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.780322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.780325] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2125ec0) 00:25:15.655 [2024-07-25 
07:31:22.780331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.655 [2024-07-25 07:31:22.780346] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a9440, cid 4, qid 0 00:25:15.655 [2024-07-25 07:31:22.780351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a95c0, cid 5, qid 0 00:25:15.655 [2024-07-25 07:31:22.780607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.655 [2024-07-25 07:31:22.780614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.655 [2024-07-25 07:31:22.780617] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.780621] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2125ec0): datao=0, datal=1024, cccid=4 00:25:15.655 [2024-07-25 07:31:22.780625] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a9440) on tqpair(0x2125ec0): expected_datao=0, payload_size=1024 00:25:15.655 [2024-07-25 07:31:22.780629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.655 [2024-07-25 07:31:22.780636] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.780639] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.780645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.656 [2024-07-25 07:31:22.780651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.656 [2024-07-25 07:31:22.780654] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.780658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a95c0) on tqpair=0x2125ec0 00:25:15.656 [2024-07-25 07:31:22.821415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.656 [2024-07-25 07:31:22.821426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.656 [2024-07-25 07:31:22.821430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.821434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a9440) on tqpair=0x2125ec0 00:25:15.656 [2024-07-25 07:31:22.821449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.821454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2125ec0) 00:25:15.656 [2024-07-25 07:31:22.821466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-07-25 07:31:22.821482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a9440, cid 4, qid 0 00:25:15.656 [2024-07-25 07:31:22.821677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.656 [2024-07-25 07:31:22.821683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.656 [2024-07-25 07:31:22.821687] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.821690] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2125ec0): datao=0, datal=3072, cccid=4 00:25:15.656 [2024-07-25 07:31:22.821694] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a9440) on tqpair(0x2125ec0): expected_datao=0, payload_size=3072 00:25:15.656 
[2024-07-25 07:31:22.821699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.821705] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.821709] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.821844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.656 [2024-07-25 07:31:22.821851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.656 [2024-07-25 07:31:22.821854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.821858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a9440) on tqpair=0x2125ec0 00:25:15.656 [2024-07-25 07:31:22.821866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.821870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2125ec0) 00:25:15.656 [2024-07-25 07:31:22.821876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-07-25 07:31:22.821889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a9440, cid 4, qid 0 00:25:15.656 [2024-07-25 07:31:22.822142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.656 [2024-07-25 07:31:22.822148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.656 [2024-07-25 07:31:22.822151] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.822155] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2125ec0): datao=0, datal=8, cccid=4 00:25:15.656 [2024-07-25 07:31:22.822159] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21a9440) on tqpair(0x2125ec0): expected_datao=0, payload_size=8 00:25:15.656 [2024-07-25 07:31:22.822163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.822170] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.822173] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.862431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.656 [2024-07-25 07:31:22.862444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.656 [2024-07-25 07:31:22.862447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.656 [2024-07-25 07:31:22.862451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a9440) on tqpair=0x2125ec0 00:25:15.656 ===================================================== 00:25:15.656 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:15.656 ===================================================== 00:25:15.656 Controller Capabilities/Features 00:25:15.656 ================================ 00:25:15.656 Vendor ID: 0000 00:25:15.656 Subsystem Vendor ID: 0000 00:25:15.656 Serial Number: .................... 00:25:15.656 Model Number: ........................................ 
00:25:15.656 Firmware Version: 24.09 00:25:15.656 Recommended Arb Burst: 0 00:25:15.656 IEEE OUI Identifier: 00 00 00 00:25:15.656 Multi-path I/O 00:25:15.656 May have multiple subsystem ports: No 00:25:15.656 May have multiple controllers: No 00:25:15.656 Associated with SR-IOV VF: No 00:25:15.656 Max Data Transfer Size: 131072 00:25:15.656 Max Number of Namespaces: 0 00:25:15.656 Max Number of I/O Queues: 1024 00:25:15.656 NVMe Specification Version (VS): 1.3 00:25:15.656 NVMe Specification Version (Identify): 1.3 00:25:15.656 Maximum Queue Entries: 128 00:25:15.656 Contiguous Queues Required: Yes 00:25:15.656 Arbitration Mechanisms Supported 00:25:15.656 Weighted Round Robin: Not Supported 00:25:15.656 Vendor Specific: Not Supported 00:25:15.656 Reset Timeout: 15000 ms 00:25:15.656 Doorbell Stride: 4 bytes 00:25:15.656 NVM Subsystem Reset: Not Supported 00:25:15.656 Command Sets Supported 00:25:15.656 NVM Command Set: Supported 00:25:15.656 Boot Partition: Not Supported 00:25:15.656 Memory Page Size Minimum: 4096 bytes 00:25:15.656 Memory Page Size Maximum: 4096 bytes 00:25:15.656 Persistent Memory Region: Not Supported 00:25:15.656 Optional Asynchronous Events Supported 00:25:15.656 Namespace Attribute Notices: Not Supported 00:25:15.656 Firmware Activation Notices: Not Supported 00:25:15.656 ANA Change Notices: Not Supported 00:25:15.656 PLE Aggregate Log Change Notices: Not Supported 00:25:15.656 LBA Status Info Alert Notices: Not Supported 00:25:15.656 EGE Aggregate Log Change Notices: Not Supported 00:25:15.656 Normal NVM Subsystem Shutdown event: Not Supported 00:25:15.656 Zone Descriptor Change Notices: Not Supported 00:25:15.656 Discovery Log Change Notices: Supported 00:25:15.656 Controller Attributes 00:25:15.656 128-bit Host Identifier: Not Supported 00:25:15.656 Non-Operational Permissive Mode: Not Supported 00:25:15.656 NVM Sets: Not Supported 00:25:15.656 Read Recovery Levels: Not Supported 00:25:15.656 Endurance Groups: Not Supported 00:25:15.656 Predictable Latency Mode: Not Supported 00:25:15.656 Traffic Based Keep ALive: Not Supported 00:25:15.656 Namespace Granularity: Not Supported 00:25:15.656 SQ Associations: Not Supported 00:25:15.656 UUID List: Not Supported 00:25:15.656 Multi-Domain Subsystem: Not Supported 00:25:15.656 Fixed Capacity Management: Not Supported 00:25:15.656 Variable Capacity Management: Not Supported 00:25:15.656 Delete Endurance Group: Not Supported 00:25:15.656 Delete NVM Set: Not Supported 00:25:15.656 Extended LBA Formats Supported: Not Supported 00:25:15.656 Flexible Data Placement Supported: Not Supported 00:25:15.656 00:25:15.656 Controller Memory Buffer Support 00:25:15.656 ================================ 00:25:15.656 Supported: No 00:25:15.656 00:25:15.656 Persistent Memory Region Support 00:25:15.656 ================================ 00:25:15.656 Supported: No 00:25:15.656 00:25:15.656 Admin Command Set Attributes 00:25:15.656 ============================ 00:25:15.656 Security Send/Receive: Not Supported 00:25:15.656 Format NVM: Not Supported 00:25:15.656 Firmware Activate/Download: Not Supported 00:25:15.656 Namespace Management: Not Supported 00:25:15.656 Device Self-Test: Not Supported 00:25:15.656 Directives: Not Supported 00:25:15.656 NVMe-MI: Not Supported 00:25:15.656 Virtualization Management: Not Supported 00:25:15.656 Doorbell Buffer Config: Not Supported 00:25:15.656 Get LBA Status Capability: Not Supported 00:25:15.656 Command & Feature Lockdown Capability: Not Supported 00:25:15.656 Abort Command Limit: 1 00:25:15.656 Async 
Event Request Limit: 4 00:25:15.656 Number of Firmware Slots: N/A 00:25:15.656 Firmware Slot 1 Read-Only: N/A 00:25:15.656 Firmware Activation Without Reset: N/A 00:25:15.656 Multiple Update Detection Support: N/A 00:25:15.656 Firmware Update Granularity: No Information Provided 00:25:15.656 Per-Namespace SMART Log: No 00:25:15.656 Asymmetric Namespace Access Log Page: Not Supported 00:25:15.656 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:15.656 Command Effects Log Page: Not Supported 00:25:15.656 Get Log Page Extended Data: Supported 00:25:15.656 Telemetry Log Pages: Not Supported 00:25:15.656 Persistent Event Log Pages: Not Supported 00:25:15.656 Supported Log Pages Log Page: May Support 00:25:15.656 Commands Supported & Effects Log Page: Not Supported 00:25:15.656 Feature Identifiers & Effects Log Page:May Support 00:25:15.656 NVMe-MI Commands & Effects Log Page: May Support 00:25:15.656 Data Area 4 for Telemetry Log: Not Supported 00:25:15.656 Error Log Page Entries Supported: 128 00:25:15.656 Keep Alive: Not Supported 00:25:15.656 00:25:15.656 NVM Command Set Attributes 00:25:15.656 ========================== 00:25:15.656 Submission Queue Entry Size 00:25:15.657 Max: 1 00:25:15.657 Min: 1 00:25:15.657 Completion Queue Entry Size 00:25:15.657 Max: 1 00:25:15.657 Min: 1 00:25:15.657 Number of Namespaces: 0 00:25:15.657 Compare Command: Not Supported 00:25:15.657 Write Uncorrectable Command: Not Supported 00:25:15.657 Dataset Management Command: Not Supported 00:25:15.657 Write Zeroes Command: Not Supported 00:25:15.657 Set Features Save Field: Not Supported 00:25:15.657 Reservations: Not Supported 00:25:15.657 Timestamp: Not Supported 00:25:15.657 Copy: Not Supported 00:25:15.657 Volatile Write Cache: Not Present 00:25:15.657 Atomic Write Unit (Normal): 1 00:25:15.657 Atomic Write Unit (PFail): 1 00:25:15.657 Atomic Compare & Write Unit: 1 00:25:15.657 Fused Compare & Write: Supported 00:25:15.657 Scatter-Gather List 00:25:15.657 SGL Command Set: Supported 00:25:15.657 SGL Keyed: Supported 00:25:15.657 SGL Bit Bucket Descriptor: Not Supported 00:25:15.657 SGL Metadata Pointer: Not Supported 00:25:15.657 Oversized SGL: Not Supported 00:25:15.657 SGL Metadata Address: Not Supported 00:25:15.657 SGL Offset: Supported 00:25:15.657 Transport SGL Data Block: Not Supported 00:25:15.657 Replay Protected Memory Block: Not Supported 00:25:15.657 00:25:15.657 Firmware Slot Information 00:25:15.657 ========================= 00:25:15.657 Active slot: 0 00:25:15.657 00:25:15.657 00:25:15.657 Error Log 00:25:15.657 ========= 00:25:15.657 00:25:15.657 Active Namespaces 00:25:15.657 ================= 00:25:15.657 Discovery Log Page 00:25:15.657 ================== 00:25:15.657 Generation Counter: 2 00:25:15.657 Number of Records: 2 00:25:15.657 Record Format: 0 00:25:15.657 00:25:15.657 Discovery Log Entry 0 00:25:15.657 ---------------------- 00:25:15.657 Transport Type: 3 (TCP) 00:25:15.657 Address Family: 1 (IPv4) 00:25:15.657 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:15.657 Entry Flags: 00:25:15.657 Duplicate Returned Information: 1 00:25:15.657 Explicit Persistent Connection Support for Discovery: 1 00:25:15.657 Transport Requirements: 00:25:15.657 Secure Channel: Not Required 00:25:15.657 Port ID: 0 (0x0000) 00:25:15.657 Controller ID: 65535 (0xffff) 00:25:15.657 Admin Max SQ Size: 128 00:25:15.657 Transport Service Identifier: 4420 00:25:15.657 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:15.657 Transport Address: 10.0.0.2 00:25:15.657 
Discovery Log Entry 1 00:25:15.657 ---------------------- 00:25:15.657 Transport Type: 3 (TCP) 00:25:15.657 Address Family: 1 (IPv4) 00:25:15.657 Subsystem Type: 2 (NVM Subsystem) 00:25:15.657 Entry Flags: 00:25:15.657 Duplicate Returned Information: 0 00:25:15.657 Explicit Persistent Connection Support for Discovery: 0 00:25:15.657 Transport Requirements: 00:25:15.657 Secure Channel: Not Required 00:25:15.657 Port ID: 0 (0x0000) 00:25:15.657 Controller ID: 65535 (0xffff) 00:25:15.657 Admin Max SQ Size: 128 00:25:15.657 Transport Service Identifier: 4420 00:25:15.657 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:15.657 Transport Address: 10.0.0.2 [2024-07-25 07:31:22.862534] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:15.657 [2024-07-25 07:31:22.862544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a8e40) on tqpair=0x2125ec0 00:25:15.657 [2024-07-25 07:31:22.862550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.657 [2024-07-25 07:31:22.862555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a8fc0) on tqpair=0x2125ec0 00:25:15.657 [2024-07-25 07:31:22.862560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.657 [2024-07-25 07:31:22.862567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a9140) on tqpair=0x2125ec0 00:25:15.657 [2024-07-25 07:31:22.862571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.657 [2024-07-25 07:31:22.862576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a92c0) on tqpair=0x2125ec0 00:25:15.657 [2024-07-25 07:31:22.862581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.657 [2024-07-25 07:31:22.862591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.862595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.862599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2125ec0) 00:25:15.657 [2024-07-25 07:31:22.862606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-07-25 07:31:22.862620] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a92c0, cid 3, qid 0 00:25:15.657 [2024-07-25 07:31:22.862937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.657 [2024-07-25 07:31:22.862943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.657 [2024-07-25 07:31:22.862947] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.862951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a92c0) on tqpair=0x2125ec0 00:25:15.657 [2024-07-25 07:31:22.862958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.862961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.862965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2125ec0) 00:25:15.657 [2024-07-25 
07:31:22.862972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-07-25 07:31:22.862985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a92c0, cid 3, qid 0 00:25:15.657 [2024-07-25 07:31:22.863133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.657 [2024-07-25 07:31:22.863139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.657 [2024-07-25 07:31:22.863143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a92c0) on tqpair=0x2125ec0 00:25:15.657 [2024-07-25 07:31:22.863151] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:15.657 [2024-07-25 07:31:22.863156] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:15.657 [2024-07-25 07:31:22.863165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2125ec0) 00:25:15.657 [2024-07-25 07:31:22.863179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-07-25 07:31:22.863189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a92c0, cid 3, qid 0 00:25:15.657 [2024-07-25 07:31:22.863365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.657 [2024-07-25 07:31:22.863372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.657 [2024-07-25 07:31:22.863375] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a92c0) on tqpair=0x2125ec0 00:25:15.657 [2024-07-25 07:31:22.863389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2125ec0) 00:25:15.657 [2024-07-25 07:31:22.863406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-07-25 07:31:22.863417] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a92c0, cid 3, qid 0 00:25:15.657 [2024-07-25 07:31:22.863587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.657 [2024-07-25 07:31:22.863593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.657 [2024-07-25 07:31:22.863597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a92c0) on tqpair=0x2125ec0 00:25:15.657 [2024-07-25 07:31:22.863610] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863617] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2125ec0) 00:25:15.657 [2024-07-25 07:31:22.863624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-07-25 07:31:22.863635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a92c0, cid 3, qid 0 00:25:15.657 [2024-07-25 07:31:22.863839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.657 [2024-07-25 07:31:22.863845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.657 [2024-07-25 07:31:22.863849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a92c0) on tqpair=0x2125ec0 00:25:15.657 [2024-07-25 07:31:22.863862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.657 [2024-07-25 07:31:22.863869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2125ec0) 00:25:15.657 [2024-07-25 07:31:22.863876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.658 [2024-07-25 07:31:22.863885] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a92c0, cid 3, qid 0 00:25:15.658 [2024-07-25 07:31:22.864041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.658 [2024-07-25 07:31:22.864047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.658 [2024-07-25 07:31:22.864050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.864054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a92c0) on tqpair=0x2125ec0 00:25:15.658 [2024-07-25 07:31:22.864063] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.864067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.864070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2125ec0) 00:25:15.658 [2024-07-25 07:31:22.864077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.658 [2024-07-25 07:31:22.864087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a92c0, cid 3, qid 0 00:25:15.658 [2024-07-25 07:31:22.868209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.658 [2024-07-25 07:31:22.868218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.658 [2024-07-25 07:31:22.868221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.868225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a92c0) on tqpair=0x2125ec0 00:25:15.658 [2024-07-25 07:31:22.868235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.868239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.868242] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2125ec0) 00:25:15.658 [2024-07-25 07:31:22.868249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.658 [2024-07-25 07:31:22.868264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21a92c0, cid 3, qid 0 00:25:15.658 [2024-07-25 07:31:22.868490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.658 [2024-07-25 07:31:22.868496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.658 [2024-07-25 07:31:22.868499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.868503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21a92c0) on tqpair=0x2125ec0 00:25:15.658 [2024-07-25 07:31:22.868510] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:25:15.658 00:25:15.658 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:15.658 [2024-07-25 07:31:22.910891] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:25:15.658 [2024-07-25 07:31:22.910960] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192354 ] 00:25:15.658 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.658 [2024-07-25 07:31:22.943744] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:15.658 [2024-07-25 07:31:22.943788] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:15.658 [2024-07-25 07:31:22.943793] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:15.658 [2024-07-25 07:31:22.943806] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:15.658 [2024-07-25 07:31:22.943814] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:15.658 [2024-07-25 07:31:22.947223] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:15.658 [2024-07-25 07:31:22.947248] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a35ec0 0 00:25:15.658 [2024-07-25 07:31:22.955209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:15.658 [2024-07-25 07:31:22.955225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:15.658 [2024-07-25 07:31:22.955230] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:15.658 [2024-07-25 07:31:22.955233] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:15.658 [2024-07-25 07:31:22.955267] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.955272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.955276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a35ec0) 00:25:15.658 [2024-07-25 07:31:22.955288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:15.658 [2024-07-25 
07:31:22.955305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8e40, cid 0, qid 0 00:25:15.658 [2024-07-25 07:31:22.963208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.658 [2024-07-25 07:31:22.963217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.658 [2024-07-25 07:31:22.963221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.963225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab8e40) on tqpair=0x1a35ec0 00:25:15.658 [2024-07-25 07:31:22.963236] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:15.658 [2024-07-25 07:31:22.963246] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:15.658 [2024-07-25 07:31:22.963251] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:15.658 [2024-07-25 07:31:22.963262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.963267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.963270] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a35ec0) 00:25:15.658 [2024-07-25 07:31:22.963277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.658 [2024-07-25 07:31:22.963290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8e40, cid 0, qid 0 00:25:15.658 [2024-07-25 07:31:22.963506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.658 [2024-07-25 07:31:22.963513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.658 [2024-07-25 07:31:22.963516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.963520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab8e40) on tqpair=0x1a35ec0 00:25:15.658 [2024-07-25 07:31:22.963528] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:15.658 [2024-07-25 07:31:22.963536] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:15.658 [2024-07-25 07:31:22.963543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.963547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.963550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a35ec0) 00:25:15.658 [2024-07-25 07:31:22.963557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.658 [2024-07-25 07:31:22.963568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8e40, cid 0, qid 0 00:25:15.658 [2024-07-25 07:31:22.963774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.658 [2024-07-25 07:31:22.963780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.658 [2024-07-25 07:31:22.963783] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.963787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1ab8e40) on tqpair=0x1a35ec0 00:25:15.658 [2024-07-25 07:31:22.963792] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:15.658 [2024-07-25 07:31:22.963800] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:15.658 [2024-07-25 07:31:22.963806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.963810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.963813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a35ec0) 00:25:15.658 [2024-07-25 07:31:22.963820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.658 [2024-07-25 07:31:22.963830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8e40, cid 0, qid 0 00:25:15.658 [2024-07-25 07:31:22.964059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.658 [2024-07-25 07:31:22.964065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.658 [2024-07-25 07:31:22.964069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.964072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab8e40) on tqpair=0x1a35ec0 00:25:15.658 [2024-07-25 07:31:22.964077] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:15.658 [2024-07-25 07:31:22.964089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.964093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.964096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a35ec0) 00:25:15.658 [2024-07-25 07:31:22.964103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.658 [2024-07-25 07:31:22.964114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8e40, cid 0, qid 0 00:25:15.658 [2024-07-25 07:31:22.964340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.658 [2024-07-25 07:31:22.964347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.658 [2024-07-25 07:31:22.964351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.658 [2024-07-25 07:31:22.964354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab8e40) on tqpair=0x1a35ec0 00:25:15.658 [2024-07-25 07:31:22.964359] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:15.658 [2024-07-25 07:31:22.964364] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:15.658 [2024-07-25 07:31:22.964371] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:15.658 [2024-07-25 07:31:22.964477] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:15.658 [2024-07-25 07:31:22.964480] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:15.658 [2024-07-25 07:31:22.964488] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.964492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.964495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a35ec0) 00:25:15.659 [2024-07-25 07:31:22.964502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.659 [2024-07-25 07:31:22.964513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8e40, cid 0, qid 0 00:25:15.659 [2024-07-25 07:31:22.964743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.659 [2024-07-25 07:31:22.964750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.659 [2024-07-25 07:31:22.964753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.964757] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab8e40) on tqpair=0x1a35ec0 00:25:15.659 [2024-07-25 07:31:22.964761] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:15.659 [2024-07-25 07:31:22.964771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.964774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.964778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a35ec0) 00:25:15.659 [2024-07-25 07:31:22.964785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.659 [2024-07-25 07:31:22.964795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8e40, cid 0, qid 0 00:25:15.659 [2024-07-25 07:31:22.965011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.659 [2024-07-25 07:31:22.965017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.659 [2024-07-25 07:31:22.965021] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965024] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab8e40) on tqpair=0x1a35ec0 00:25:15.659 [2024-07-25 07:31:22.965029] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:15.659 [2024-07-25 07:31:22.965036] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:15.659 [2024-07-25 07:31:22.965043] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:15.659 [2024-07-25 07:31:22.965056] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:15.659 [2024-07-25 07:31:22.965064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a35ec0) 00:25:15.659 
[2024-07-25 07:31:22.965075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.659 [2024-07-25 07:31:22.965086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8e40, cid 0, qid 0 00:25:15.659 [2024-07-25 07:31:22.965336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.659 [2024-07-25 07:31:22.965344] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.659 [2024-07-25 07:31:22.965347] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965351] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a35ec0): datao=0, datal=4096, cccid=0 00:25:15.659 [2024-07-25 07:31:22.965356] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab8e40) on tqpair(0x1a35ec0): expected_datao=0, payload_size=4096 00:25:15.659 [2024-07-25 07:31:22.965360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965367] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965371] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.659 [2024-07-25 07:31:22.965512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.659 [2024-07-25 07:31:22.965515] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab8e40) on tqpair=0x1a35ec0 00:25:15.659 [2024-07-25 07:31:22.965526] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:15.659 [2024-07-25 07:31:22.965531] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:15.659 [2024-07-25 07:31:22.965535] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:15.659 [2024-07-25 07:31:22.965539] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:15.659 [2024-07-25 07:31:22.965544] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:15.659 [2024-07-25 07:31:22.965548] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:15.659 [2024-07-25 07:31:22.965556] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:15.659 [2024-07-25 07:31:22.965566] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a35ec0) 00:25:15.659 [2024-07-25 07:31:22.965581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:15.659 [2024-07-25 07:31:22.965593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8e40, cid 0, qid 0 00:25:15.659 [2024-07-25 
07:31:22.965826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.659 [2024-07-25 07:31:22.965836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.659 [2024-07-25 07:31:22.965839] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab8e40) on tqpair=0x1a35ec0 00:25:15.659 [2024-07-25 07:31:22.965850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a35ec0) 00:25:15.659 [2024-07-25 07:31:22.965863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.659 [2024-07-25 07:31:22.965870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965877] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a35ec0) 00:25:15.659 [2024-07-25 07:31:22.965883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.659 [2024-07-25 07:31:22.965889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a35ec0) 00:25:15.659 [2024-07-25 07:31:22.965902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.659 [2024-07-25 07:31:22.965907] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a35ec0) 00:25:15.659 [2024-07-25 07:31:22.965920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.659 [2024-07-25 07:31:22.965925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:15.659 [2024-07-25 07:31:22.965935] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:15.659 [2024-07-25 07:31:22.965941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.659 [2024-07-25 07:31:22.965945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a35ec0) 00:25:15.659 [2024-07-25 07:31:22.965952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.659 [2024-07-25 07:31:22.965964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8e40, cid 0, qid 0 00:25:15.659 [2024-07-25 07:31:22.965969] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8fc0, cid 1, qid 0 00:25:15.659 [2024-07-25 07:31:22.965974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9140, cid 2, qid 0 00:25:15.659 [2024-07-25 07:31:22.965978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab92c0, cid 3, qid 0 00:25:15.660 [2024-07-25 07:31:22.965983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9440, cid 4, qid 0 00:25:15.660 [2024-07-25 07:31:22.966246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.660 [2024-07-25 07:31:22.966253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.660 [2024-07-25 07:31:22.966256] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.966260] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab9440) on tqpair=0x1a35ec0 00:25:15.660 [2024-07-25 07:31:22.966265] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:15.660 [2024-07-25 07:31:22.966272] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.966282] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.966289] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.966295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.966299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.966302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a35ec0) 00:25:15.660 [2024-07-25 07:31:22.966309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:15.660 [2024-07-25 07:31:22.966320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9440, cid 4, qid 0 00:25:15.660 [2024-07-25 07:31:22.966554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.660 [2024-07-25 07:31:22.966560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.660 [2024-07-25 07:31:22.966563] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.966567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab9440) on tqpair=0x1a35ec0 00:25:15.660 [2024-07-25 07:31:22.966633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.966642] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.966649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.966653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a35ec0) 00:25:15.660 [2024-07-25 07:31:22.966659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.660 [2024-07-25 07:31:22.966670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9440, cid 4, qid 0 00:25:15.660 [2024-07-25 07:31:22.966940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.660 [2024-07-25 07:31:22.966947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.660 [2024-07-25 07:31:22.966950] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.966954] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a35ec0): datao=0, datal=4096, cccid=4 00:25:15.660 [2024-07-25 07:31:22.966958] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab9440) on tqpair(0x1a35ec0): expected_datao=0, payload_size=4096 00:25:15.660 [2024-07-25 07:31:22.966963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.966969] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.966973] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.967115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.660 [2024-07-25 07:31:22.967121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.660 [2024-07-25 07:31:22.967124] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.967128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab9440) on tqpair=0x1a35ec0 00:25:15.660 [2024-07-25 07:31:22.967137] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:15.660 [2024-07-25 07:31:22.967146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.967155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.967166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.967170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a35ec0) 00:25:15.660 [2024-07-25 07:31:22.967177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.660 [2024-07-25 07:31:22.967188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9440, cid 4, qid 0 00:25:15.660 [2024-07-25 07:31:22.971207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.660 [2024-07-25 07:31:22.971215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.660 [2024-07-25 07:31:22.971218] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971222] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a35ec0): datao=0, datal=4096, cccid=4 00:25:15.660 [2024-07-25 07:31:22.971226] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab9440) on tqpair(0x1a35ec0): expected_datao=0, payload_size=4096 00:25:15.660 [2024-07-25 07:31:22.971231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971237] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
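The admin-queue trace above shows the host side of controller initialization against nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420: keep-alive timeout, queue-count negotiation, and the identify phases ("Namespace 1 was added", then Identify NS). For orientation only, a minimal sketch of a target-side configuration that would produce a controller like the one dumped further below, assuming a running SPDK nvmf_tgt and the standard rpc.py helpers; the illustrative Malloc0 bdev and its 64 MiB / 512-byte geometry are chosen to match the namespace figures in the dump, and the exact commands this test actually used are not part of this excerpt:

scripts/rpc.py nvmf_create_transport -t tcp                        # enable the TCP transport
scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                # 64 MiB RAM-backed bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420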
00:25:15.660 [2024-07-25 07:31:22.971241] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.660 [2024-07-25 07:31:22.971252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.660 [2024-07-25 07:31:22.971255] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab9440) on tqpair=0x1a35ec0 00:25:15.660 [2024-07-25 07:31:22.971272] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.971281] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.971288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a35ec0) 00:25:15.660 [2024-07-25 07:31:22.971298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.660 [2024-07-25 07:31:22.971310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9440, cid 4, qid 0 00:25:15.660 [2024-07-25 07:31:22.971525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.660 [2024-07-25 07:31:22.971531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.660 [2024-07-25 07:31:22.971535] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971538] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a35ec0): datao=0, datal=4096, cccid=4 00:25:15.660 [2024-07-25 07:31:22.971542] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab9440) on tqpair(0x1a35ec0): expected_datao=0, payload_size=4096 00:25:15.660 [2024-07-25 07:31:22.971547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971608] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971612] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.660 [2024-07-25 07:31:22.971828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.660 [2024-07-25 07:31:22.971832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab9440) on tqpair=0x1a35ec0 00:25:15.660 [2024-07-25 07:31:22.971843] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.971853] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.971861] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.971868] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.971873] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.971879] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.971884] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:15.660 [2024-07-25 07:31:22.971888] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:15.660 [2024-07-25 07:31:22.971893] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:15.660 [2024-07-25 07:31:22.971906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a35ec0) 00:25:15.660 [2024-07-25 07:31:22.971917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.660 [2024-07-25 07:31:22.971924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.971931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a35ec0) 00:25:15.660 [2024-07-25 07:31:22.971937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.660 [2024-07-25 07:31:22.971951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9440, cid 4, qid 0 00:25:15.660 [2024-07-25 07:31:22.971956] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab95c0, cid 5, qid 0 00:25:15.660 [2024-07-25 07:31:22.972156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.660 [2024-07-25 07:31:22.972163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.660 [2024-07-25 07:31:22.972166] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.660 [2024-07-25 07:31:22.972170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab9440) on tqpair=0x1a35ec0 00:25:15.661 [2024-07-25 07:31:22.972177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.661 [2024-07-25 07:31:22.972182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.661 [2024-07-25 07:31:22.972186] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.972189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab95c0) on tqpair=0x1a35ec0 00:25:15.661 [2024-07-25 07:31:22.972198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.972207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a35ec0) 00:25:15.661 [2024-07-25 07:31:22.972214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-25 07:31:22.972225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab95c0, cid 5, qid 0 00:25:15.661 [2024-07-25 07:31:22.972479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.661 [2024-07-25 07:31:22.972485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.661 [2024-07-25 07:31:22.972489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.972495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab95c0) on tqpair=0x1a35ec0 00:25:15.661 [2024-07-25 07:31:22.972504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.972507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a35ec0) 00:25:15.661 [2024-07-25 07:31:22.972514] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-25 07:31:22.972524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab95c0, cid 5, qid 0 00:25:15.661 [2024-07-25 07:31:22.972745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.661 [2024-07-25 07:31:22.972751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.661 [2024-07-25 07:31:22.972755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.972759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab95c0) on tqpair=0x1a35ec0 00:25:15.661 [2024-07-25 07:31:22.972768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.972771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a35ec0) 00:25:15.661 [2024-07-25 07:31:22.972778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-25 07:31:22.972787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab95c0, cid 5, qid 0 00:25:15.661 [2024-07-25 07:31:22.973018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.661 [2024-07-25 07:31:22.973024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.661 [2024-07-25 07:31:22.973028] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab95c0) on tqpair=0x1a35ec0 00:25:15.661 [2024-07-25 07:31:22.973046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a35ec0) 00:25:15.661 [2024-07-25 07:31:22.973057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-25 07:31:22.973064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a35ec0) 00:25:15.661 [2024-07-25 07:31:22.973074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-25 07:31:22.973081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a35ec0) 00:25:15.661 [2024-07-25 07:31:22.973090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-25 07:31:22.973097] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973101] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a35ec0) 00:25:15.661 [2024-07-25 07:31:22.973107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.661 [2024-07-25 07:31:22.973119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab95c0, cid 5, qid 0 00:25:15.661 [2024-07-25 07:31:22.973124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9440, cid 4, qid 0 00:25:15.661 [2024-07-25 07:31:22.973129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9740, cid 6, qid 0 00:25:15.661 [2024-07-25 07:31:22.973133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab98c0, cid 7, qid 0 00:25:15.661 [2024-07-25 07:31:22.973385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.661 [2024-07-25 07:31:22.973392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.661 [2024-07-25 07:31:22.973396] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973399] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a35ec0): datao=0, datal=8192, cccid=5 00:25:15.661 [2024-07-25 07:31:22.973404] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab95c0) on tqpair(0x1a35ec0): expected_datao=0, payload_size=8192 00:25:15.661 [2024-07-25 07:31:22.973408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973713] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973716] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.661 [2024-07-25 07:31:22.973728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.661 [2024-07-25 07:31:22.973731] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973734] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a35ec0): datao=0, datal=512, cccid=4 00:25:15.661 [2024-07-25 07:31:22.973739] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab9440) on tqpair(0x1a35ec0): expected_datao=0, payload_size=512 00:25:15.661 [2024-07-25 07:31:22.973743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973749] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973753] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.661 
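The GET LOG PAGE commands printed above request the Error Information (LID 01h, cdw10 07ff0001), SMART / Health Information (LID 02h, 007f0002), Firmware Slot Information (LID 03h, 007f0003) and Commands Supported and Effects (LID 05h, 03ff0005) pages; the C2H data PDUs that follow carry their payloads (8192, 512, 512 and 4096 bytes on cccid 5, 4, 6 and 7). Purely as a reference point, the same pages could be read from a Linux initiator with nvme-cli, assuming the controller enumerates as /dev/nvme0 after connecting (the device name is an assumption):

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme error-log /dev/nvme0      # LID 01h, Error Information
nvme smart-log /dev/nvme0      # LID 02h, SMART / Health Information
nvme fw-log /dev/nvme0         # LID 03h, Firmware Slot Information
nvme effects-log /dev/nvme0    # LID 05h, Commands Supported and Effects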
[2024-07-25 07:31:22.973764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.661 [2024-07-25 07:31:22.973767] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973771] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a35ec0): datao=0, datal=512, cccid=6 00:25:15.661 [2024-07-25 07:31:22.973775] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab9740) on tqpair(0x1a35ec0): expected_datao=0, payload_size=512 00:25:15.661 [2024-07-25 07:31:22.973779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973785] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973789] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.661 [2024-07-25 07:31:22.973800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.661 [2024-07-25 07:31:22.973803] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973807] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a35ec0): datao=0, datal=4096, cccid=7 00:25:15.661 [2024-07-25 07:31:22.973811] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab98c0) on tqpair(0x1a35ec0): expected_datao=0, payload_size=4096 00:25:15.661 [2024-07-25 07:31:22.973815] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973822] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973825] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.661 [2024-07-25 07:31:22.973934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.661 [2024-07-25 07:31:22.973937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab95c0) on tqpair=0x1a35ec0 00:25:15.661 [2024-07-25 07:31:22.973954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.661 [2024-07-25 07:31:22.973960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.661 [2024-07-25 07:31:22.973965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab9440) on tqpair=0x1a35ec0 00:25:15.661 [2024-07-25 07:31:22.973979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.661 [2024-07-25 07:31:22.973985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.661 [2024-07-25 07:31:22.973988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 07:31:22.973992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab9740) on tqpair=0x1a35ec0 00:25:15.661 [2024-07-25 07:31:22.973999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.661 [2024-07-25 07:31:22.974004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.661 [2024-07-25 07:31:22.974008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.661 [2024-07-25 
07:31:22.974011] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab98c0) on tqpair=0x1a35ec0 00:25:15.661 ===================================================== 00:25:15.661 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:15.661 ===================================================== 00:25:15.661 Controller Capabilities/Features 00:25:15.661 ================================ 00:25:15.661 Vendor ID: 8086 00:25:15.661 Subsystem Vendor ID: 8086 00:25:15.661 Serial Number: SPDK00000000000001 00:25:15.661 Model Number: SPDK bdev Controller 00:25:15.661 Firmware Version: 24.09 00:25:15.661 Recommended Arb Burst: 6 00:25:15.661 IEEE OUI Identifier: e4 d2 5c 00:25:15.661 Multi-path I/O 00:25:15.661 May have multiple subsystem ports: Yes 00:25:15.661 May have multiple controllers: Yes 00:25:15.661 Associated with SR-IOV VF: No 00:25:15.661 Max Data Transfer Size: 131072 00:25:15.661 Max Number of Namespaces: 32 00:25:15.661 Max Number of I/O Queues: 127 00:25:15.661 NVMe Specification Version (VS): 1.3 00:25:15.661 NVMe Specification Version (Identify): 1.3 00:25:15.661 Maximum Queue Entries: 128 00:25:15.662 Contiguous Queues Required: Yes 00:25:15.662 Arbitration Mechanisms Supported 00:25:15.662 Weighted Round Robin: Not Supported 00:25:15.662 Vendor Specific: Not Supported 00:25:15.662 Reset Timeout: 15000 ms 00:25:15.662 Doorbell Stride: 4 bytes 00:25:15.662 NVM Subsystem Reset: Not Supported 00:25:15.662 Command Sets Supported 00:25:15.662 NVM Command Set: Supported 00:25:15.662 Boot Partition: Not Supported 00:25:15.662 Memory Page Size Minimum: 4096 bytes 00:25:15.662 Memory Page Size Maximum: 4096 bytes 00:25:15.662 Persistent Memory Region: Not Supported 00:25:15.662 Optional Asynchronous Events Supported 00:25:15.662 Namespace Attribute Notices: Supported 00:25:15.662 Firmware Activation Notices: Not Supported 00:25:15.662 ANA Change Notices: Not Supported 00:25:15.662 PLE Aggregate Log Change Notices: Not Supported 00:25:15.662 LBA Status Info Alert Notices: Not Supported 00:25:15.662 EGE Aggregate Log Change Notices: Not Supported 00:25:15.662 Normal NVM Subsystem Shutdown event: Not Supported 00:25:15.662 Zone Descriptor Change Notices: Not Supported 00:25:15.662 Discovery Log Change Notices: Not Supported 00:25:15.662 Controller Attributes 00:25:15.662 128-bit Host Identifier: Supported 00:25:15.662 Non-Operational Permissive Mode: Not Supported 00:25:15.662 NVM Sets: Not Supported 00:25:15.662 Read Recovery Levels: Not Supported 00:25:15.662 Endurance Groups: Not Supported 00:25:15.662 Predictable Latency Mode: Not Supported 00:25:15.662 Traffic Based Keep ALive: Not Supported 00:25:15.662 Namespace Granularity: Not Supported 00:25:15.662 SQ Associations: Not Supported 00:25:15.662 UUID List: Not Supported 00:25:15.662 Multi-Domain Subsystem: Not Supported 00:25:15.662 Fixed Capacity Management: Not Supported 00:25:15.662 Variable Capacity Management: Not Supported 00:25:15.662 Delete Endurance Group: Not Supported 00:25:15.662 Delete NVM Set: Not Supported 00:25:15.662 Extended LBA Formats Supported: Not Supported 00:25:15.662 Flexible Data Placement Supported: Not Supported 00:25:15.662 00:25:15.662 Controller Memory Buffer Support 00:25:15.662 ================================ 00:25:15.662 Supported: No 00:25:15.662 00:25:15.662 Persistent Memory Region Support 00:25:15.662 ================================ 00:25:15.662 Supported: No 00:25:15.662 00:25:15.662 Admin Command Set Attributes 00:25:15.662 ============================ 
00:25:15.662 Security Send/Receive: Not Supported 00:25:15.662 Format NVM: Not Supported 00:25:15.662 Firmware Activate/Download: Not Supported 00:25:15.662 Namespace Management: Not Supported 00:25:15.662 Device Self-Test: Not Supported 00:25:15.662 Directives: Not Supported 00:25:15.662 NVMe-MI: Not Supported 00:25:15.662 Virtualization Management: Not Supported 00:25:15.662 Doorbell Buffer Config: Not Supported 00:25:15.662 Get LBA Status Capability: Not Supported 00:25:15.662 Command & Feature Lockdown Capability: Not Supported 00:25:15.662 Abort Command Limit: 4 00:25:15.662 Async Event Request Limit: 4 00:25:15.662 Number of Firmware Slots: N/A 00:25:15.662 Firmware Slot 1 Read-Only: N/A 00:25:15.662 Firmware Activation Without Reset: N/A 00:25:15.662 Multiple Update Detection Support: N/A 00:25:15.662 Firmware Update Granularity: No Information Provided 00:25:15.662 Per-Namespace SMART Log: No 00:25:15.662 Asymmetric Namespace Access Log Page: Not Supported 00:25:15.662 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:15.662 Command Effects Log Page: Supported 00:25:15.662 Get Log Page Extended Data: Supported 00:25:15.662 Telemetry Log Pages: Not Supported 00:25:15.662 Persistent Event Log Pages: Not Supported 00:25:15.662 Supported Log Pages Log Page: May Support 00:25:15.662 Commands Supported & Effects Log Page: Not Supported 00:25:15.662 Feature Identifiers & Effects Log Page:May Support 00:25:15.662 NVMe-MI Commands & Effects Log Page: May Support 00:25:15.662 Data Area 4 for Telemetry Log: Not Supported 00:25:15.662 Error Log Page Entries Supported: 128 00:25:15.662 Keep Alive: Supported 00:25:15.662 Keep Alive Granularity: 10000 ms 00:25:15.662 00:25:15.662 NVM Command Set Attributes 00:25:15.662 ========================== 00:25:15.662 Submission Queue Entry Size 00:25:15.662 Max: 64 00:25:15.662 Min: 64 00:25:15.662 Completion Queue Entry Size 00:25:15.662 Max: 16 00:25:15.662 Min: 16 00:25:15.662 Number of Namespaces: 32 00:25:15.662 Compare Command: Supported 00:25:15.662 Write Uncorrectable Command: Not Supported 00:25:15.662 Dataset Management Command: Supported 00:25:15.662 Write Zeroes Command: Supported 00:25:15.662 Set Features Save Field: Not Supported 00:25:15.662 Reservations: Supported 00:25:15.662 Timestamp: Not Supported 00:25:15.662 Copy: Supported 00:25:15.662 Volatile Write Cache: Present 00:25:15.662 Atomic Write Unit (Normal): 1 00:25:15.662 Atomic Write Unit (PFail): 1 00:25:15.662 Atomic Compare & Write Unit: 1 00:25:15.662 Fused Compare & Write: Supported 00:25:15.662 Scatter-Gather List 00:25:15.662 SGL Command Set: Supported 00:25:15.662 SGL Keyed: Supported 00:25:15.662 SGL Bit Bucket Descriptor: Not Supported 00:25:15.662 SGL Metadata Pointer: Not Supported 00:25:15.662 Oversized SGL: Not Supported 00:25:15.662 SGL Metadata Address: Not Supported 00:25:15.662 SGL Offset: Supported 00:25:15.662 Transport SGL Data Block: Not Supported 00:25:15.662 Replay Protected Memory Block: Not Supported 00:25:15.662 00:25:15.662 Firmware Slot Information 00:25:15.662 ========================= 00:25:15.662 Active slot: 1 00:25:15.662 Slot 1 Firmware Revision: 24.09 00:25:15.662 00:25:15.662 00:25:15.662 Commands Supported and Effects 00:25:15.662 ============================== 00:25:15.662 Admin Commands 00:25:15.662 -------------- 00:25:15.662 Get Log Page (02h): Supported 00:25:15.662 Identify (06h): Supported 00:25:15.662 Abort (08h): Supported 00:25:15.662 Set Features (09h): Supported 00:25:15.662 Get Features (0Ah): Supported 00:25:15.662 Asynchronous Event 
Request (0Ch): Supported 00:25:15.662 Keep Alive (18h): Supported 00:25:15.662 I/O Commands 00:25:15.662 ------------ 00:25:15.662 Flush (00h): Supported LBA-Change 00:25:15.662 Write (01h): Supported LBA-Change 00:25:15.662 Read (02h): Supported 00:25:15.662 Compare (05h): Supported 00:25:15.662 Write Zeroes (08h): Supported LBA-Change 00:25:15.662 Dataset Management (09h): Supported LBA-Change 00:25:15.662 Copy (19h): Supported LBA-Change 00:25:15.662 00:25:15.662 Error Log 00:25:15.662 ========= 00:25:15.662 00:25:15.662 Arbitration 00:25:15.662 =========== 00:25:15.662 Arbitration Burst: 1 00:25:15.662 00:25:15.662 Power Management 00:25:15.662 ================ 00:25:15.662 Number of Power States: 1 00:25:15.662 Current Power State: Power State #0 00:25:15.662 Power State #0: 00:25:15.662 Max Power: 0.00 W 00:25:15.662 Non-Operational State: Operational 00:25:15.662 Entry Latency: Not Reported 00:25:15.662 Exit Latency: Not Reported 00:25:15.662 Relative Read Throughput: 0 00:25:15.662 Relative Read Latency: 0 00:25:15.662 Relative Write Throughput: 0 00:25:15.662 Relative Write Latency: 0 00:25:15.662 Idle Power: Not Reported 00:25:15.662 Active Power: Not Reported 00:25:15.662 Non-Operational Permissive Mode: Not Supported 00:25:15.662 00:25:15.662 Health Information 00:25:15.662 ================== 00:25:15.662 Critical Warnings: 00:25:15.662 Available Spare Space: OK 00:25:15.662 Temperature: OK 00:25:15.662 Device Reliability: OK 00:25:15.662 Read Only: No 00:25:15.662 Volatile Memory Backup: OK 00:25:15.662 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:15.662 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:15.662 Available Spare: 0% 00:25:15.662 Available Spare Threshold: 0% 00:25:15.662 Life Percentage Used:[2024-07-25 07:31:22.974111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.662 [2024-07-25 07:31:22.974116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a35ec0) 00:25:15.662 [2024-07-25 07:31:22.974123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.662 [2024-07-25 07:31:22.974136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab98c0, cid 7, qid 0 00:25:15.662 [2024-07-25 07:31:22.974352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.662 [2024-07-25 07:31:22.974359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.662 [2024-07-25 07:31:22.974362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.662 [2024-07-25 07:31:22.974366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab98c0) on tqpair=0x1a35ec0 00:25:15.663 [2024-07-25 07:31:22.974396] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:15.663 [2024-07-25 07:31:22.974405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab8e40) on tqpair=0x1a35ec0 00:25:15.663 [2024-07-25 07:31:22.974411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.663 [2024-07-25 07:31:22.974416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab8fc0) on tqpair=0x1a35ec0 00:25:15.663 [2024-07-25 07:31:22.974421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.663 
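Here the host application run by identify.sh has finished reading the controller and starts tearing it down: nvme_ctrlr_destruct_async completes the outstanding ASYNC EVENT REQUESTs with ABORTED - SQ DELETION (the records above and just below), and the FABRIC PROPERTY GET/SET exchanges that follow implement the normal controller shutdown handshake (RTD3E = 0, 10000 ms shutdown timeout), finishing with "shutdown complete in 4 milliseconds". For orientation only, the rough analogue on a kernel initiator would be a plain disconnect; this is not what the SPDK example does internally:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # detach all controllers for this subsystem NQN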
[2024-07-25 07:31:22.974426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab9140) on tqpair=0x1a35ec0 00:25:15.663 [2024-07-25 07:31:22.974430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.663 [2024-07-25 07:31:22.974435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab92c0) on tqpair=0x1a35ec0 00:25:15.663 [2024-07-25 07:31:22.974439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.663 [2024-07-25 07:31:22.974447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.974451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.974455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a35ec0) 00:25:15.663 [2024-07-25 07:31:22.974462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.663 [2024-07-25 07:31:22.974475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab92c0, cid 3, qid 0 00:25:15.663 [2024-07-25 07:31:22.974676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.663 [2024-07-25 07:31:22.974682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.663 [2024-07-25 07:31:22.974686] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.974692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab92c0) on tqpair=0x1a35ec0 00:25:15.663 [2024-07-25 07:31:22.974699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.974703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.974706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a35ec0) 00:25:15.663 [2024-07-25 07:31:22.974713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.663 [2024-07-25 07:31:22.974726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab92c0, cid 3, qid 0 00:25:15.663 [2024-07-25 07:31:22.974954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.663 [2024-07-25 07:31:22.974960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.663 [2024-07-25 07:31:22.974964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.974967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab92c0) on tqpair=0x1a35ec0 00:25:15.663 [2024-07-25 07:31:22.974972] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:15.663 [2024-07-25 07:31:22.974977] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:15.663 [2024-07-25 07:31:22.974986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.974990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.974993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a35ec0) 00:25:15.663 [2024-07-25 07:31:22.975000] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.663 [2024-07-25 07:31:22.975010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab92c0, cid 3, qid 0 00:25:15.663 [2024-07-25 07:31:22.979209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.663 [2024-07-25 07:31:22.979219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.663 [2024-07-25 07:31:22.979222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.979226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab92c0) on tqpair=0x1a35ec0 00:25:15.663 [2024-07-25 07:31:22.979237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.979241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.979245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a35ec0) 00:25:15.663 [2024-07-25 07:31:22.979252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.663 [2024-07-25 07:31:22.979264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab92c0, cid 3, qid 0 00:25:15.663 [2024-07-25 07:31:22.979474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.663 [2024-07-25 07:31:22.979480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.663 [2024-07-25 07:31:22.979484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.663 [2024-07-25 07:31:22.979487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab92c0) on tqpair=0x1a35ec0 00:25:15.663 [2024-07-25 07:31:22.979495] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:25:15.663 0% 00:25:15.663 Data Units Read: 0 00:25:15.663 Data Units Written: 0 00:25:15.663 Host Read Commands: 0 00:25:15.663 Host Write Commands: 0 00:25:15.663 Controller Busy Time: 0 minutes 00:25:15.663 Power Cycles: 0 00:25:15.663 Power On Hours: 0 hours 00:25:15.663 Unsafe Shutdowns: 0 00:25:15.663 Unrecoverable Media Errors: 0 00:25:15.663 Lifetime Error Log Entries: 0 00:25:15.663 Warning Temperature Time: 0 minutes 00:25:15.663 Critical Temperature Time: 0 minutes 00:25:15.663 00:25:15.663 Number of Queues 00:25:15.663 ================ 00:25:15.663 Number of I/O Submission Queues: 127 00:25:15.663 Number of I/O Completion Queues: 127 00:25:15.663 00:25:15.663 Active Namespaces 00:25:15.663 ================= 00:25:15.663 Namespace ID:1 00:25:15.663 Error Recovery Timeout: Unlimited 00:25:15.663 Command Set Identifier: NVM (00h) 00:25:15.663 Deallocate: Supported 00:25:15.663 Deallocated/Unwritten Error: Not Supported 00:25:15.663 Deallocated Read Value: Unknown 00:25:15.663 Deallocate in Write Zeroes: Not Supported 00:25:15.663 Deallocated Guard Field: 0xFFFF 00:25:15.663 Flush: Supported 00:25:15.663 Reservation: Supported 00:25:15.663 Namespace Sharing Capabilities: Multiple Controllers 00:25:15.663 Size (in LBAs): 131072 (0GiB) 00:25:15.663 Capacity (in LBAs): 131072 (0GiB) 00:25:15.663 Utilization (in LBAs): 131072 (0GiB) 00:25:15.663 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:15.663 EUI64: ABCDEF0123456789 00:25:15.663 UUID: 785d71fb-f812-4a79-8388-a060d5328508 00:25:15.663 Thin Provisioning: Not Supported 
00:25:15.663 Per-NS Atomic Units: Yes 00:25:15.663 Atomic Boundary Size (Normal): 0 00:25:15.663 Atomic Boundary Size (PFail): 0 00:25:15.663 Atomic Boundary Offset: 0 00:25:15.663 Maximum Single Source Range Length: 65535 00:25:15.663 Maximum Copy Length: 65535 00:25:15.663 Maximum Source Range Count: 1 00:25:15.663 NGUID/EUI64 Never Reused: No 00:25:15.663 Namespace Write Protected: No 00:25:15.663 Number of LBA Formats: 1 00:25:15.663 Current LBA Format: LBA Format #00 00:25:15.663 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:15.663 00:25:15.663 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:15.663 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:15.663 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.663 07:31:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.663 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.663 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:15.663 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:15.663 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:15.663 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:25:15.663 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:15.663 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:25:15.663 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:15.663 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:15.925 rmmod nvme_tcp 00:25:15.925 rmmod nvme_fabrics 00:25:15.925 rmmod nvme_keyring 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 192154 ']' 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 192154 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 192154 ']' 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 192154 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 192154 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 192154' 00:25:15.925 killing process with pid 192154 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 192154 00:25:15.925 07:31:23 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 192154 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.925 07:31:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:18.521 00:25:18.521 real 0m11.203s 00:25:18.521 user 0m7.781s 00:25:18.521 sys 0m5.893s 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:18.521 ************************************ 00:25:18.521 END TEST nvmf_identify 00:25:18.521 ************************************ 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.521 ************************************ 00:25:18.521 START TEST nvmf_perf 00:25:18.521 ************************************ 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:18.521 * Looking for test storage... 
00:25:18.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
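The perf test opens the same way the identify test did: perf.sh sources test/nvmf/common.sh, which pins the fabric ports to 4420-4422 and generates a host NQN with nvme gen-hostnqn, and then nvmftestinit arms the cleanup trap before preparing the physical NICs (NET_TYPE=phy). A simplified skeleton of that pattern, with names taken from this log and the body reduced to the calls relevant here; this is a sketch, not the actual perf.sh:

#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path as used in this run
source "$rootdir/test/nvmf/common.sh"   # NVMF_PORT=4420..4422, NVME_HOSTNQN via 'nvme gen-hostnqn'
nvmftestinit                            # sets 'trap nvmftestfini SIGINT SIGTERM EXIT', then prepares the e810 ports and 10.0.0.1/10.0.0.2
# ... perf workload would go here ...
nvmftestfini                            # explicit teardown, mirroring what identify.sh did above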
00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:18.521 07:31:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:26.669 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.670 
07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:26.670 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:26.670 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:25:26.670 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:26.670 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:26.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:25:26.670 00:25:26.670 --- 10.0.0.2 ping statistics --- 00:25:26.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.670 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:26.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:25:26.670 00:25:26.670 --- 10.0.0.1 ping statistics --- 00:25:26.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.670 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=196662 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 196662 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 196662 ']' 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
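The network plumbing traced above boils down to a short, repeatable sequence: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, its peer (cvl_0_1) stays in the default namespace as 10.0.0.1, both directions are ping-checked, and nvmf_tgt is then started inside the namespace while waitforlisten polls /var/tmp/spdk.sock. A minimal sketch of the equivalent commands (interface names, addresses, core mask and ports are the values from this particular run; long Jenkins workspace paths abbreviated):

    ip netns add cvl_0_0_ns_spdk                                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                            # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # the target is then configured over scripts/rpc.py and benchmarked with spdk_nvme_perf, e.g.
    #   spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'        (local baseline)
    #   spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'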
00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:26.670 07:31:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:26.670 [2024-07-25 07:31:32.963448] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:25:26.670 [2024-07-25 07:31:32.963515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.671 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.671 [2024-07-25 07:31:33.038625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:26.671 [2024-07-25 07:31:33.114603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.671 [2024-07-25 07:31:33.114644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.671 [2024-07-25 07:31:33.114651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.671 [2024-07-25 07:31:33.114658] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.671 [2024-07-25 07:31:33.114663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:26.671 [2024-07-25 07:31:33.115001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.671 [2024-07-25 07:31:33.115116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.671 [2024-07-25 07:31:33.115263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.671 [2024-07-25 07:31:33.115262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:26.671 07:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.671 07:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:25:26.671 07:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:26.671 07:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.671 07:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:26.671 07:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.671 07:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:26.671 07:31:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:26.931 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:26.932 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:27.192 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:27.192 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:27.452 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:27.452 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:65:00.0 ']' 00:25:27.452 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:27.452 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:27.452 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:27.452 [2024-07-25 07:31:34.769483] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.452 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:27.713 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:27.713 07:31:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:27.974 07:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:27.974 07:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:27.974 07:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:28.235 [2024-07-25 07:31:35.448038] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.235 07:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:28.496 07:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:28.496 07:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:28.496 07:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:28.496 07:31:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:29.881 Initializing NVMe Controllers 00:25:29.881 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:29.881 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:29.881 Initialization complete. Launching workers. 
00:25:29.881 ======================================================== 00:25:29.881 Latency(us) 00:25:29.881 Device Information : IOPS MiB/s Average min max 00:25:29.881 PCIE (0000:65:00.0) NSID 1 from core 0: 79474.58 310.45 402.14 13.34 4705.45 00:25:29.881 ======================================================== 00:25:29.881 Total : 79474.58 310.45 402.14 13.34 4705.45 00:25:29.881 00:25:29.881 07:31:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:29.881 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.823 Initializing NVMe Controllers 00:25:30.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:30.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:30.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:30.823 Initialization complete. Launching workers. 00:25:30.823 ======================================================== 00:25:30.823 Latency(us) 00:25:30.823 Device Information : IOPS MiB/s Average min max 00:25:30.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.75 0.37 10541.51 555.12 46451.47 00:25:30.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.88 0.18 21501.70 5518.35 47905.10 00:25:30.823 ======================================================== 00:25:30.823 Total : 142.62 0.56 14143.81 555.12 47905.10 00:25:30.823 00:25:30.823 07:31:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:31.083 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.026 Initializing NVMe Controllers 00:25:32.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:32.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:32.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:32.026 Initialization complete. Launching workers. 
00:25:32.026 ======================================================== 00:25:32.026 Latency(us) 00:25:32.026 Device Information : IOPS MiB/s Average min max 00:25:32.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8285.45 32.37 3866.38 758.84 12160.55 00:25:32.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3803.75 14.86 8470.00 6934.65 27635.08 00:25:32.026 ======================================================== 00:25:32.026 Total : 12089.20 47.22 5314.87 758.84 27635.08 00:25:32.026 00:25:32.026 07:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:32.026 07:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:32.026 07:31:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:32.287 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.840 Initializing NVMe Controllers 00:25:34.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:34.840 Controller IO queue size 128, less than required. 00:25:34.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:34.840 Controller IO queue size 128, less than required. 00:25:34.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:34.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:34.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:34.840 Initialization complete. Launching workers. 00:25:34.840 ======================================================== 00:25:34.840 Latency(us) 00:25:34.840 Device Information : IOPS MiB/s Average min max 00:25:34.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 940.90 235.22 140520.95 80032.78 180607.19 00:25:34.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 594.94 148.73 222761.14 70299.00 328222.90 00:25:34.840 ======================================================== 00:25:34.840 Total : 1535.83 383.96 172378.32 70299.00 328222.90 00:25:34.840 00:25:34.840 07:31:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:34.840 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.100 No valid NVMe controllers or AIO or URING devices found 00:25:35.100 Initializing NVMe Controllers 00:25:35.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:35.100 Controller IO queue size 128, less than required. 00:25:35.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:35.100 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:35.100 Controller IO queue size 128, less than required. 00:25:35.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:35.100 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:35.100 WARNING: Some requested NVMe devices were skipped 00:25:35.100 07:31:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:35.100 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.647 Initializing NVMe Controllers 00:25:37.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.647 Controller IO queue size 128, less than required. 00:25:37.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:37.647 Controller IO queue size 128, less than required. 00:25:37.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:37.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:37.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:37.647 Initialization complete. Launching workers. 00:25:37.647 00:25:37.647 ==================== 00:25:37.648 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:37.648 TCP transport: 00:25:37.648 polls: 47599 00:25:37.648 idle_polls: 18551 00:25:37.648 sock_completions: 29048 00:25:37.648 nvme_completions: 3611 00:25:37.648 submitted_requests: 5430 00:25:37.648 queued_requests: 1 00:25:37.648 00:25:37.648 ==================== 00:25:37.648 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:37.648 TCP transport: 00:25:37.648 polls: 46370 00:25:37.648 idle_polls: 16549 00:25:37.648 sock_completions: 29821 00:25:37.648 nvme_completions: 3657 00:25:37.648 submitted_requests: 5484 00:25:37.648 queued_requests: 1 00:25:37.648 ======================================================== 00:25:37.648 Latency(us) 00:25:37.648 Device Information : IOPS MiB/s Average min max 00:25:37.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 900.69 225.17 149725.13 78296.24 246679.80 00:25:37.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 912.17 228.04 143979.72 84880.50 210334.02 00:25:37.648 ======================================================== 00:25:37.648 Total : 1812.86 453.22 146834.24 78296.24 246679.80 00:25:37.648 00:25:37.648 07:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:37.648 07:31:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:37.909 rmmod nvme_tcp 00:25:37.909 rmmod nvme_fabrics 00:25:37.909 rmmod nvme_keyring 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 196662 ']' 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 196662 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 196662 ']' 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 196662 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 196662 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 196662' 00:25:37.909 killing process with pid 196662 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 196662 00:25:37.909 07:31:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 196662 00:25:39.824 07:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:39.824 07:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:39.824 07:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:39.824 07:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:39.824 07:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:39.824 07:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.824 07:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.824 07:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.443 07:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:42.443 00:25:42.443 real 0m23.812s 00:25:42.443 user 0m57.833s 00:25:42.443 sys 0m7.810s 00:25:42.443 07:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:42.443 07:31:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:42.443 ************************************ 00:25:42.443 END TEST nvmf_perf 00:25:42.443 ************************************ 00:25:42.443 07:31:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:42.443 07:31:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:42.443 07:31:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:42.443 07:31:49 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.443 ************************************ 00:25:42.443 START TEST nvmf_fio_host 00:25:42.443 ************************************ 00:25:42.443 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:42.443 * Looking for test storage... 00:25:42.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.443 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.443 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.444 07:31:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.041 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.041 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:49.041 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:49.041 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:49.041 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:49.041 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:49.041 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:49.042 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:49.042 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:49.042 07:31:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:49.042 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:49.042 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:49.042 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.304 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.304 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.304 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.304 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:49.304 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.304 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.304 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:49.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:25:49.565 00:25:49.565 --- 10.0.0.2 ping statistics --- 00:25:49.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.565 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:25:49.565 00:25:49.565 --- 10.0.0.1 ping statistics --- 00:25:49.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.565 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=203494 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 203494 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 203494 ']' 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:49.565 07:31:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.565 [2024-07-25 07:31:56.826529] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
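The fio host test repeats the same pattern: nvmf_tgt is started inside cvl_0_0_ns_spdk, waitforlisten waits for /var/tmp/spdk.sock to answer, and the target is then configured over RPC and exercised with fio through the SPDK NVMe plugin, as the trace below shows. Stripped of the xtrace noise, the configuration and fio invocation are roughly (rpc.py and fio paths abbreviated; the NQN, serial number and addresses are the values used in this run):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                    # 64 MB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # run fio against the listener through the SPDK NVMe plugin
    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

A second pass later in the trace repeats the run with mock_sgl_config.fio and a 16 KiB block size.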
00:25:49.565 [2024-07-25 07:31:56.826592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.565 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.565 [2024-07-25 07:31:56.898685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.826 [2024-07-25 07:31:56.975531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.826 [2024-07-25 07:31:56.975566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.826 [2024-07-25 07:31:56.975574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.826 [2024-07-25 07:31:56.975580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.826 [2024-07-25 07:31:56.975586] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.826 [2024-07-25 07:31:56.975663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.826 [2024-07-25 07:31:56.975797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.826 [2024-07-25 07:31:56.975840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.826 [2024-07-25 07:31:56.975841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.397 07:31:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:50.397 07:31:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:25:50.397 07:31:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:50.397 [2024-07-25 07:31:57.757556] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.656 07:31:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:50.656 07:31:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:50.656 07:31:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.657 07:31:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:50.657 Malloc1 00:25:50.657 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.917 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:51.176 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.176 [2024-07-25 07:31:58.475721] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.176 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:51.438 
07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:51.438 07:31:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:51.699 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:51.699 fio-3.35 00:25:51.699 Starting 
1 thread 00:25:51.699 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.282 00:25:54.282 test: (groupid=0, jobs=1): err= 0: pid=204264: Thu Jul 25 07:32:01 2024 00:25:54.282 read: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(107MiB/2004msec) 00:25:54.282 slat (usec): min=2, max=283, avg= 2.17, stdev= 2.36 00:25:54.282 clat (usec): min=3045, max=12727, avg=5368.47, stdev=1066.93 00:25:54.282 lat (usec): min=3047, max=12740, avg=5370.64, stdev=1067.22 00:25:54.282 clat percentiles (usec): 00:25:54.282 | 1.00th=[ 3818], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 4686], 00:25:54.282 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5276], 00:25:54.282 | 70.00th=[ 5473], 80.00th=[ 5800], 90.00th=[ 6587], 95.00th=[ 7373], 00:25:54.282 | 99.00th=[ 9634], 99.50th=[10683], 99.90th=[12387], 99.95th=[12518], 00:25:54.282 | 99.99th=[12780] 00:25:54.282 bw ( KiB/s): min=52456, max=55752, per=99.92%, avg=54778.00, stdev=1557.33, samples=4 00:25:54.282 iops : min=13114, max=13938, avg=13694.50, stdev=389.33, samples=4 00:25:54.282 write: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(107MiB/2004msec); 0 zone resets 00:25:54.282 slat (usec): min=2, max=262, avg= 2.23, stdev= 1.75 00:25:54.282 clat (usec): min=2008, max=10693, avg=3915.35, stdev=592.57 00:25:54.282 lat (usec): min=2010, max=10697, avg=3917.58, stdev=592.75 00:25:54.282 clat percentiles (usec): 00:25:54.282 | 1.00th=[ 2540], 5.00th=[ 2933], 10.00th=[ 3195], 20.00th=[ 3490], 00:25:54.282 | 30.00th=[ 3687], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 4047], 00:25:54.282 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4752], 00:25:54.282 | 99.00th=[ 5669], 99.50th=[ 6259], 99.90th=[ 8094], 99.95th=[ 8979], 00:25:54.282 | 99.99th=[10028] 00:25:54.282 bw ( KiB/s): min=52824, max=55472, per=100.00%, avg=54736.00, stdev=1279.62, samples=4 00:25:54.282 iops : min=13206, max=13868, avg=13684.00, stdev=319.90, samples=4 00:25:54.282 lat (msec) : 4=28.79%, 10=70.81%, 20=0.41% 00:25:54.282 cpu : usr=70.74%, sys=22.42%, ctx=18, majf=0, minf=6 00:25:54.282 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:54.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:54.282 issued rwts: total=27466,27418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:54.282 00:25:54.282 Run status group 0 (all jobs): 00:25:54.282 READ: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=107MiB (113MB), run=2004-2004msec 00:25:54.282 WRITE: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=107MiB (112MB), run=2004-2004msec 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:54.282 07:32:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:54.544 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:54.544 fio-3.35 00:25:54.544 Starting 1 thread 00:25:54.544 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.091 00:25:57.091 test: (groupid=0, jobs=1): err= 0: pid=204789: Thu Jul 25 07:32:03 2024 00:25:57.091 read: IOPS=8364, BW=131MiB/s (137MB/s)(262MiB/2005msec) 00:25:57.091 slat (usec): min=3, max=110, avg= 3.62, stdev= 1.43 00:25:57.091 clat (usec): min=2618, max=30076, avg=9262.95, stdev=2577.41 00:25:57.091 lat (usec): min=2621, max=30080, avg=9266.57, stdev=2577.70 00:25:57.091 clat percentiles (usec): 00:25:57.091 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 6980], 00:25:57.091 | 30.00th=[ 7635], 40.00th=[ 8356], 50.00th=[ 9110], 60.00th=[ 9765], 00:25:57.091 | 70.00th=[10421], 80.00th=[11207], 90.00th=[12518], 95.00th=[13960], 00:25:57.091 | 99.00th=[16712], 99.50th=[18220], 99.90th=[20317], 99.95th=[20579], 00:25:57.091 | 99.99th=[26608] 00:25:57.091 bw ( KiB/s): min=59616, max=84640, per=52.41%, avg=70136.00, stdev=12225.88, samples=4 
00:25:57.091 iops : min= 3726, max= 5290, avg=4383.50, stdev=764.12, samples=4 00:25:57.091 write: IOPS=5042, BW=78.8MiB/s (82.6MB/s)(143MiB/1815msec); 0 zone resets 00:25:57.091 slat (usec): min=39, max=332, avg=40.97, stdev= 6.96 00:25:57.091 clat (usec): min=3262, max=22078, avg=9968.15, stdev=2138.91 00:25:57.091 lat (usec): min=3302, max=22123, avg=10009.12, stdev=2141.18 00:25:57.091 clat percentiles (usec): 00:25:57.091 | 1.00th=[ 6063], 5.00th=[ 7373], 10.00th=[ 7767], 20.00th=[ 8291], 00:25:57.091 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10159], 00:25:57.091 | 70.00th=[10683], 80.00th=[11338], 90.00th=[12387], 95.00th=[13304], 00:25:57.091 | 99.00th=[18482], 99.50th=[20317], 99.90th=[20841], 99.95th=[20841], 00:25:57.091 | 99.99th=[22152] 00:25:57.091 bw ( KiB/s): min=61056, max=87936, per=90.16%, avg=72744.00, stdev=12936.48, samples=4 00:25:57.091 iops : min= 3816, max= 5496, avg=4546.50, stdev=808.53, samples=4 00:25:57.091 lat (msec) : 4=0.27%, 10=61.79%, 20=37.50%, 50=0.44% 00:25:57.091 cpu : usr=80.89%, sys=14.32%, ctx=20, majf=0, minf=19 00:25:57.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:57.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:57.091 issued rwts: total=16771,9153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:57.091 00:25:57.091 Run status group 0 (all jobs): 00:25:57.091 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=262MiB (275MB), run=2005-2005msec 00:25:57.091 WRITE: bw=78.8MiB/s (82.6MB/s), 78.8MiB/s-78.8MiB/s (82.6MB/s-82.6MB/s), io=143MiB (150MB), run=1815-1815msec 00:25:57.091 07:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:57.091 rmmod nvme_tcp 00:25:57.091 rmmod nvme_fabrics 00:25:57.091 rmmod nvme_keyring 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 203494 ']' 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 203494 
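For context, both fio passes above go through SPDK's fio plugin rather than a kernel block device: fio_nvme LD_PRELOADs build/fio/spdk_nvme and hands the NVMe/TCP target to fio as a transport-style --filename instead of a device path, which is why the job lines report ioengine=spdk. A minimal sketch of the same invocation, using the paths from this workspace (fio itself is assumed at /usr/src/fio, as in the trace):

  # run the packaged example job against the NVMe/TCP subsystem on 10.0.0.2:4420
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096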
00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 203494 ']' 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 203494 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 203494 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 203494' 00:25:57.091 killing process with pid 203494 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 203494 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 203494 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.091 07:32:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:59.639 00:25:59.639 real 0m17.182s 00:25:59.639 user 1m5.746s 00:25:59.639 sys 0m7.291s 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.639 ************************************ 00:25:59.639 END TEST nvmf_fio_host 00:25:59.639 ************************************ 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.639 ************************************ 00:25:59.639 START TEST nvmf_failover 00:25:59.639 ************************************ 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:59.639 * Looking for test storage... 
00:25:59.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
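One detail worth keeping in mind when reading the failover trace that follows: the test drives two SPDK processes over JSON-RPC. Target-side provisioning (transport, malloc bdev, subsystem, listeners) goes to the nvmf target on the default /var/tmp/spdk.sock, while path management for the initiator goes to bdevperf via the bdevperf_rpc_sock defined just above. A rough sketch of the pattern, taken from commands that appear later in this log (paths shortened relative to the SPDK checkout):

  # target side: default RPC socket
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # initiator side: address bdevperf's own socket with -s
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1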
00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:59.639 07:32:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.236 07:32:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:06.236 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:06.236 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.236 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:06.237 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:06.237 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.237 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.531 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.531 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.531 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:06.531 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.531 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.531 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.531 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:06.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:26:06.531 00:26:06.531 --- 10.0.0.2 ping statistics --- 00:26:06.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.531 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:26:06.531 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:06.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:26:06.531 00:26:06.531 --- 10.0.0.1 ping statistics --- 00:26:06.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.531 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=209420 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 209420 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 209420 ']' 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:06.792 07:32:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.792 [2024-07-25 07:32:14.007346] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:26:06.792 [2024-07-25 07:32:14.007415] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.792 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.792 [2024-07-25 07:32:14.097685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:07.053 [2024-07-25 07:32:14.191403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.053 [2024-07-25 07:32:14.191465] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.053 [2024-07-25 07:32:14.191472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.053 [2024-07-25 07:32:14.191479] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.053 [2024-07-25 07:32:14.191485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
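At this point the target application is coming up inside the network namespace that nvmftestinit prepared above: the first e810 port (cvl_0_0, 10.0.0.2) was moved into cvl_0_0_ns_spdk and serves as the NVMe/TCP target side, while cvl_0_1 (10.0.0.1) stays in the default namespace as the initiator side, with an iptables rule opening port 4420 and a ping in each direction as a sanity check. Condensed from the commands traced above, the topology amounts to roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator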
00:26:07.053 [2024-07-25 07:32:14.191617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:07.053 [2024-07-25 07:32:14.191787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.053 [2024-07-25 07:32:14.191788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:26:07.626 07:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:07.626 07:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:26:07.626 07:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:07.626 07:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:07.626 07:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:07.626 07:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.626 07:32:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:07.626 [2024-07-25 07:32:14.971115] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.887 07:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:07.887 Malloc0 00:26:07.887 07:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:08.148 07:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:08.409 07:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.409 [2024-07-25 07:32:15.657148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.409 07:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:08.671 [2024-07-25 07:32:15.821567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:08.671 07:32:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:08.671 [2024-07-25 07:32:15.982077] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:08.671 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=209845 00:26:08.671 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:08.671 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:08.671 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 209845 /var/tmp/bdevperf.sock 00:26:08.671 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 209845 ']' 00:26:08.671 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.671 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.671 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:08.671 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.671 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:09.614 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:09.614 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:26:09.614 07:32:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:09.876 NVMe0n1 00:26:09.876 07:32:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:10.138 00:26:10.138 07:32:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:10.138 07:32:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=210125 00:26:10.138 07:32:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:11.082 07:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:11.345 [2024-07-25 07:32:18.583032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74da0 is same with the state(5) to be set 00:26:11.345 [2024-07-25 07:32:18.583077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74da0 is same with the state(5) to be set 00:26:11.345 [2024-07-25 07:32:18.583084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74da0 is same with the state(5) to be set 00:26:11.345 [2024-07-25 07:32:18.583089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74da0 is same with the state(5) to be set 00:26:11.345 [2024-07-25 07:32:18.583093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74da0 is same with the state(5) to be set 00:26:11.345 [2024-07-25 07:32:18.583098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74da0 is same with the state(5) to be set 00:26:11.345 [2024-07-25 07:32:18.583103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xa74da0 is same with the state(5) to be set
00:26:11.345 [2024-07-25 07:32:18.583312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74da0 is same with the state(5) to be set 00:26:11.345 [2024-07-25 07:32:18.583317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74da0 is same with the state(5) to be set 00:26:11.345 [2024-07-25 07:32:18.583321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74da0 is same with the state(5) to be set 00:26:11.345 07:32:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:14.654 07:32:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:14.654 00:26:14.916 07:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:14.916 [2024-07-25 07:32:22.191263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 
00:26:14.916 [2024-07-25 07:32:22.191365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 [2024-07-25 07:32:22.191436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75b70 is same with the state(5) to be set 00:26:14.916 07:32:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:18.217 07:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:18.217 [2024-07-25 07:32:25.367817] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.217 07:32:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:19.159 07:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:19.420 [2024-07-25 07:32:26.548280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa76910 is same with the state(5) to be set 
00:26:19.420 07:32:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 210125
00:26:26.020 0
00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 209845
00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@950 -- # '[' -z 209845 ']' 00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 209845 00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 209845 00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 209845' 00:26:26.020 killing process with pid 209845 00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 209845 00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 209845 00:26:26.020 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:26.020 [2024-07-25 07:32:16.060785] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:26:26.020 [2024-07-25 07:32:16.060844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid209845 ] 00:26:26.020 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.020 [2024-07-25 07:32:16.119929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.020 [2024-07-25 07:32:16.185134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.020 Running I/O for 15 seconds... 
00:26:26.020 [2024-07-25 07:32:18.583899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.583934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.583950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.583958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.583969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.583976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.583985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.583992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 
07:32:18.584100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.020 [2024-07-25 07:32:18.584264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.020 [2024-07-25 07:32:18.584274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584436] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.021 [2024-07-25 07:32:18.584558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.021 [2024-07-25 07:32:18.584574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.021 [2024-07-25 07:32:18.584590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 
nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.021 [2024-07-25 07:32:18.584736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.021 [2024-07-25 07:32:18.584752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102648 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:26.021 [2024-07-25 07:32:18.584768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.021 [2024-07-25 07:32:18.584784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.021 [2024-07-25 07:32:18.584800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.021 [2024-07-25 07:32:18.584816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.021 [2024-07-25 07:32:18.584832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.021 [2024-07-25 07:32:18.584890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.021 [2024-07-25 07:32:18.584898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.584907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.584914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.584923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.022 [2024-07-25 07:32:18.584931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.584942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.584949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.584958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.584965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.584974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.022 [2024-07-25 07:32:18.584982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.584991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.022 [2024-07-25 07:32:18.584999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.022 [2024-07-25 07:32:18.585015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.022 [2024-07-25 07:32:18.585032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.022 [2024-07-25 07:32:18.585048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.022 [2024-07-25 07:32:18.585064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.022 [2024-07-25 07:32:18.585081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.022 [2024-07-25 07:32:18.585097] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.022 [2024-07-25 07:32:18.585113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585429] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.022 [2024-07-25 07:32:18.585536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.022 [2024-07-25 07:32:18.585543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:18.585756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 
[2024-07-25 07:32:18.585928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.585985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.585994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.586001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.586010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.023 [2024-07-25 07:32:18.586017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.586037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:26.023 [2024-07-25 07:32:18.586043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.023 [2024-07-25 07:32:18.586050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102888 len:8 PRP1 0x0 PRP2 0x0 00:26:26.023 [2024-07-25 07:32:18.586057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.586095] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f165d0 was disconnected and freed. reset controller. 
00:26:26.023 [2024-07-25 07:32:18.586105] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:26.023 [2024-07-25 07:32:18.586124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.023 [2024-07-25 07:32:18.586132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.586141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.023 [2024-07-25 07:32:18.586148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.586156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.023 [2024-07-25 07:32:18.586164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.586172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.023 [2024-07-25 07:32:18.586179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:18.586186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.023 [2024-07-25 07:32:18.589781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.023 [2024-07-25 07:32:18.589806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eeff70 (9): Bad file descriptor 00:26:26.023 [2024-07-25 07:32:18.677023] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:26.023 [2024-07-25 07:32:22.192210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.023 [2024-07-25 07:32:22.192250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.023 [2024-07-25 07:32:22.192264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192421] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.024 [2024-07-25 07:32:22.192784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.024 [2024-07-25 07:32:22.192793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.192802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.192819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.192835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.192851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:57416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.192868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.192884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.192900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.192919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57448 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.192935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.192952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.192968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.192984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.192994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.193001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.193017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.193035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.193051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.193067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.193084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 
[2024-07-25 07:32:22.193100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.193116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.193134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.193150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.193166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.025 [2024-07-25 07:32:22.193184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.025 [2024-07-25 07:32:22.193428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.025 [2024-07-25 07:32:22.193435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:57656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 
07:32:22.193790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.193991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.193997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.194007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.194014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.194024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.194032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.194042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.194049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.194058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.194065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.194075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.194082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.194092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.026 [2024-07-25 07:32:22.194099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.026 [2024-07-25 07:32:22.194108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.027 [2024-07-25 07:32:22.194115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.027 [2024-07-25 07:32:22.194132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.027 [2024-07-25 07:32:22.194149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.027 [2024-07-25 07:32:22.194166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.027 [2024-07-25 07:32:22.194183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.027 [2024-07-25 07:32:22.194204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.027 [2024-07-25 07:32:22.194220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.027 [2024-07-25 07:32:22.194237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.027 [2024-07-25 07:32:22.194254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.027 [2024-07-25 07:32:22.194284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57960 len:8 PRP1 0x0 PRP2 0x0 00:26:26.027 [2024-07-25 07:32:22.194292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:26.027 [2024-07-25 07:32:22.194308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.027 [2024-07-25 07:32:22.194314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57968 len:8 
PRP1 0x0 PRP2 0x0 00:26:26.027 [2024-07-25 07:32:22.194321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:26.027 [2024-07-25 07:32:22.194333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.027 [2024-07-25 07:32:22.194339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57976 len:8 PRP1 0x0 PRP2 0x0 00:26:26.027 [2024-07-25 07:32:22.194347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:26.027 [2024-07-25 07:32:22.194361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.027 [2024-07-25 07:32:22.194367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57984 len:8 PRP1 0x0 PRP2 0x0 00:26:26.027 [2024-07-25 07:32:22.194373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:26.027 [2024-07-25 07:32:22.194386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.027 [2024-07-25 07:32:22.194393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57992 len:8 PRP1 0x0 PRP2 0x0 00:26:26.027 [2024-07-25 07:32:22.194400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:26.027 [2024-07-25 07:32:22.194414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.027 [2024-07-25 07:32:22.194422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58000 len:8 PRP1 0x0 PRP2 0x0 00:26:26.027 [2024-07-25 07:32:22.194429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:26.027 [2024-07-25 07:32:22.194442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.027 [2024-07-25 07:32:22.194448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58008 len:8 PRP1 0x0 PRP2 0x0 00:26:26.027 [2024-07-25 07:32:22.194456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:26.027 [2024-07-25 07:32:22.194469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.027 [2024-07-25 07:32:22.194475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58016 len:8 PRP1 0x0 PRP2 0x0 00:26:26.027 [2024-07-25 
07:32:22.194482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:26.027 [2024-07-25 07:32:22.194495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.027 [2024-07-25 07:32:22.194502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57384 len:8 PRP1 0x0 PRP2 0x0 00:26:26.027 [2024-07-25 07:32:22.194509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194542] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f1ecc0 was disconnected and freed. reset controller. 00:26:26.027 [2024-07-25 07:32:22.194551] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:26.027 [2024-07-25 07:32:22.194570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.027 [2024-07-25 07:32:22.194578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.027 [2024-07-25 07:32:22.194594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.027 [2024-07-25 07:32:22.194609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.027 [2024-07-25 07:32:22.194624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:22.194631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.027 [2024-07-25 07:32:22.194665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eeff70 (9): Bad file descriptor 00:26:26.027 [2024-07-25 07:32:22.198229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.027 [2024-07-25 07:32:22.228255] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:26.027 [2024-07-25 07:32:26.549159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.027 [2024-07-25 07:32:26.549207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:26.549224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.027 [2024-07-25 07:32:26.549232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:26.549242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.027 [2024-07-25 07:32:26.549249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:26.549259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.027 [2024-07-25 07:32:26.549267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:26.549276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.027 [2024-07-25 07:32:26.549284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:26.549294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.027 [2024-07-25 07:32:26.549302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:26.549313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.027 [2024-07-25 07:32:26.549320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:26.549330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.027 [2024-07-25 07:32:26.549337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.027 [2024-07-25 07:32:26.549346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.027 [2024-07-25 07:32:26.549353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549380] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.028 [2024-07-25 07:32:26.549439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.028 [2024-07-25 07:32:26.549455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549549] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.028 [2024-07-25 07:32:26.549716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.028 [2024-07-25 07:32:26.549723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [the remaining queued commands are elided here: between 07:32:26.549733 and 07:32:26.551312 the trace repeats the same pair of notices for every outstanding READ in the lba 82008-82632 range and WRITE in the lba 82664-82744 range (len:8 each) on qid:1, each printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion with the identical ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 status while the submission queue is deleted] 00:26:26.031 [2024-07-25 07:32:26.551321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:6 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.031 [2024-07-25 07:32:26.551329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.031 [2024-07-25 07:32:26.551339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.031 [2024-07-25 07:32:26.551346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.031 [2024-07-25 07:32:26.551355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.031 [2024-07-25 07:32:26.551362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.031 [2024-07-25 07:32:26.551371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.031 [2024-07-25 07:32:26.551378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.031 [2024-07-25 07:32:26.551387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.031 [2024-07-25 07:32:26.551395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.031 [2024-07-25 07:32:26.551404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:26.031 [2024-07-25 07:32:26.551412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.031 [2024-07-25 07:32:26.551431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:26.031 [2024-07-25 07:32:26.551438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:26.031 [2024-07-25 07:32:26.551444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82800 len:8 PRP1 0x0 PRP2 0x0 00:26:26.031 [2024-07-25 07:32:26.551452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.031 [2024-07-25 07:32:26.551491] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f20480 was disconnected and freed. reset controller. 
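The burst of notices above is the expected signature of the path teardown: nvme_qpair_abort_queued_reqs drains qid:1, and each queued READ/WRITE is completed with ABORTED - SQ DELETION (00/08), that is status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion. As a rough way to quantify this when reading a captured copy of this output (the file name below is illustrative; try.txt is the capture the harness itself greps further down):

# count the I/O that were completed as aborted while qid:1 was being deleted
grep -c 'ABORTED - SQ DELETION' try.txt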
00:26:26.031 [2024-07-25 07:32:26.551500] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:26.031 [2024-07-25 07:32:26.551519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.031 [2024-07-25 07:32:26.551527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.031 [2024-07-25 07:32:26.551536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.031 [2024-07-25 07:32:26.551543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.031 [2024-07-25 07:32:26.551552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.031 [2024-07-25 07:32:26.551559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.031 [2024-07-25 07:32:26.551567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.031 [2024-07-25 07:32:26.551574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.031 [2024-07-25 07:32:26.551582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.031 [2024-07-25 07:32:26.555180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.031 [2024-07-25 07:32:26.555211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eeff70 (9): Bad file descriptor 00:26:26.031 [2024-07-25 07:32:26.719857] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
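With the reset reported successful, the 15-second verify run has survived its last path switch (10.0.0.2:4422 back to 10.0.0.2:4420). The pass/fail gate the script applies next is easier to read restated than in the xtrace below; the grep's file argument is not visible in the trace, so the try.txt capture (the file host/failover.sh cats at @94) is assumed here:

# host/failover.sh@65-67: one successful reset is expected per listener switch
count=$(grep -c 'Resetting controller successful' try.txt)
# the trace records count=3; the script only continues past @67 when (( count != 3 )) is false
(( count == 3 ))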
00:26:26.031 00:26:26.031 Latency(us) 00:26:26.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.031 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:26.031 Verification LBA range: start 0x0 length 0x4000 00:26:26.031 NVMe0n1 : 15.00 11578.32 45.23 707.32 0.00 10391.02 1058.13 22719.15 00:26:26.031 =================================================================================================================== 00:26:26.031 Total : 11578.32 45.23 707.32 0.00 10391.02 1058.13 22719.15 00:26:26.031 Received shutdown signal, test time was about 15.000000 seconds 00:26:26.031 00:26:26.031 Latency(us) 00:26:26.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.031 =================================================================================================================== 00:26:26.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=213136 00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 213136 /var/tmp/bdevperf.sock 00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 213136 ']' 00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:26.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
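While this second bdevperf instance (started with -z, so it idles on /var/tmp/bdevperf.sock until it is configured and driven over RPC) comes up, the script wires up the failover topology. Condensed from the xtrace that follows, with the absolute script paths shortened; every command below appears verbatim in the trace:

# expose two more listeners on the target, then give bdev_nvme three paths to the same subsystem
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# detach the active 4420 path to force a failover (the captured log later shows 4420 -> 4421), wait, then drive I/O for one second
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests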
00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:26.031 07:32:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:26.292 07:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:26.292 07:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:26:26.292 07:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:26.553 [2024-07-25 07:32:33.722917] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:26.553 07:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:26.553 [2024-07-25 07:32:33.895316] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:26.815 07:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:26.815 NVMe0n1 00:26:26.815 07:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:27.075 00:26:27.076 07:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:27.679 00:26:27.679 07:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:27.679 07:32:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:27.680 07:32:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:27.940 07:32:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:31.243 07:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:31.243 07:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:31.243 07:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:31.243 07:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=214155 00:26:31.243 07:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 214155 00:26:32.186 0 00:26:32.186 07:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:32.186 [2024-07-25 07:32:32.808137] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:26:32.186 [2024-07-25 07:32:32.808192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid213136 ] 00:26:32.186 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.186 [2024-07-25 07:32:32.866720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.186 [2024-07-25 07:32:32.929496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.186 [2024-07-25 07:32:35.131952] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:32.186 [2024-07-25 07:32:35.132001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.186 [2024-07-25 07:32:35.132013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.186 [2024-07-25 07:32:35.132023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.186 [2024-07-25 07:32:35.132031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.186 [2024-07-25 07:32:35.132039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.186 [2024-07-25 07:32:35.132046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.186 [2024-07-25 07:32:35.132055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.186 [2024-07-25 07:32:35.132062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.186 [2024-07-25 07:32:35.132069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.186 [2024-07-25 07:32:35.132098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.186 [2024-07-25 07:32:35.132114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbdf70 (9): Bad file descriptor 00:26:32.186 [2024-07-25 07:32:35.138523] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:32.186 Running I/O for 1 seconds... 
00:26:32.186 00:26:32.186 Latency(us) 00:26:32.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.186 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:32.186 Verification LBA range: start 0x0 length 0x4000 00:26:32.186 NVMe0n1 : 1.00 11465.42 44.79 0.00 0.00 11110.22 1747.63 17367.04 00:26:32.186 =================================================================================================================== 00:26:32.186 Total : 11465.42 44.79 0.00 0.00 11110.22 1747.63 17367.04 00:26:32.186 07:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:32.186 07:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:32.447 07:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:32.447 07:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:32.447 07:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:32.708 07:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:32.969 07:32:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 213136 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 213136 ']' 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 213136 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 213136 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 213136' 00:26:36.270 killing process with pid 213136 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 213136 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 213136 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:36.270 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.530 rmmod nvme_tcp 00:26:36.530 rmmod nvme_fabrics 00:26:36.530 rmmod nvme_keyring 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 209420 ']' 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 209420 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 209420 ']' 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 209420 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 209420 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 209420' 00:26:36.530 killing process with pid 209420 00:26:36.530 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 209420 00:26:36.531 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 209420 00:26:36.791 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:36.791 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:36.791 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.791 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.791 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.791 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.791 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:36.791 07:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.732 07:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:38.732 00:26:38.732 real 0m39.396s 00:26:38.732 user 2m1.670s 00:26:38.732 sys 0m7.984s 00:26:38.732 07:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.732 07:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:38.732 ************************************ 00:26:38.732 END TEST nvmf_failover 00:26:38.732 ************************************ 00:26:38.732 07:32:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:38.732 07:32:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:38.732 07:32:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:38.732 07:32:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.732 ************************************ 00:26:38.732 START TEST nvmf_host_discovery 00:26:38.732 ************************************ 00:26:38.732 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:38.994 * Looking for test storage... 00:26:38.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:38.994 07:32:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:38.994 07:32:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:47.142 07:32:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:47.142 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:47.142 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.142 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:47.142 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:47.143 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:47.143 07:32:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:47.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:26:47.143 00:26:47.143 --- 10.0.0.2 ping statistics --- 00:26:47.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.143 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:47.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:26:47.143 00:26:47.143 --- 10.0.0.1 ping statistics --- 00:26:47.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.143 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=219476 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 219476 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 219476 ']' 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
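The nvmf_tcp_init steps traced above boil down to a short interface bring-up. As a condensed sketch, using this run's values (E810 ports cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, 10.0.0.0/24 addressing; another rig would report different port names):

    # move the target-side port into its own network namespace, keep the initiator port in the root ns
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns

Because NVMF_APP is prefixed with NVMF_TARGET_NS_CMD, the target application below is launched as 'ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2', i.e. entirely inside that namespace, and the test then waits for its RPC socket at /var/tmp/spdk.sock.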
00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:47.143 07:32:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.143 [2024-07-25 07:32:53.528472] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:26:47.143 [2024-07-25 07:32:53.528527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.143 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.143 [2024-07-25 07:32:53.614965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.143 [2024-07-25 07:32:53.704857] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.143 [2024-07-25 07:32:53.704917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.143 [2024-07-25 07:32:53.704926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.143 [2024-07-25 07:32:53.704934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.143 [2024-07-25 07:32:53.704940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:47.143 [2024-07-25 07:32:53.704966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.143 [2024-07-25 07:32:54.356504] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:26:47.143 [2024-07-25 07:32:54.368777] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.143 null0 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.143 null1 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:47.143 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.144 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.144 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.144 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=219566 00:26:47.144 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 219566 /tmp/host.sock 00:26:47.144 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:47.144 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 219566 ']' 00:26:47.144 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:47.144 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:47.144 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:47.144 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:47.144 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:47.144 07:32:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.144 [2024-07-25 07:32:54.465707] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:26:47.144 [2024-07-25 07:32:54.465772] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219566 ] 00:26:47.144 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.405 [2024-07-25 07:32:54.529640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.405 [2024-07-25 07:32:54.604211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.977 07:32:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:47.977 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:48.238 07:32:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.238 [2024-07-25 07:32:55.579795] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.238 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.239 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:48.239 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.239 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.239 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.239 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:48.239 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.239 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:48.239 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:26:48.500 07:32:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:26:49.072 [2024-07-25 07:32:56.295331] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:49.072 [2024-07-25 07:32:56.295353] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:49.072 [2024-07-25 07:32:56.295367] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:49.072 
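The INFO lines directly above come from the host application's discovery poller. bdev_nvme_start_discovery (issued earlier against /tmp/host.sock) attaches the discovery controller at 10.0.0.2:8009 and fetches a discovery log page; since the host NQN nqn.2021-12.io.spdk:test has just been whitelisted on cnode0, the subsystem is now reported to this host, and the next entries show bdev_nvme attaching it as controller nvme0 with bdev nvme0n1. Pulled together, the RPC sequence that produced this state looks as follows (rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client; that it could equally be driven with scripts/rpc.py is an assumption, not something shown in this trace):

    # target app, default socket /var/tmp/spdk.sock, running inside the cvl_0_0_ns_spdk namespace
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    # host app, a second nvmf_tgt instance on /tmp/host.sock, already polling the discovery service
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test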
[2024-07-25 07:32:56.384646] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:49.333 [2024-07-25 07:32:56.611644] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:49.333 [2024-07-25 07:32:56.611667] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:49.594 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
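The expectations before and after this point all go through a handful of shell helpers whose behaviour can be read straight out of the trace. Reconstructed from the xtrace (so a sketch of host/discovery.sh and autotest_common.sh as exercised here, not a copy of the sources):

    get_subsystem_names() {     # controller names known to the host app
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {           # bdevs created on top of those controllers
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {     # listening ports (trsvcid) of one controller's paths
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() {  # new notify events since $notify_id
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
    waitforcondition() {        # retry an arbitrary condition, up to ten times, one second apart
        local cond=$1 max=10
        while ((max--)); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

The one-second retry loop is why a check such as [[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]] further down is allowed to see only 4420 on its first pass and still succeed once the discovery poller has added the 4421 path.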
00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:49.595 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:49.856 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:49.856 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:49.856 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.856 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.856 07:32:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:49.856 07:32:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.856 [2024-07-25 07:32:57.127775] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:49.856 [2024-07-25 07:32:57.129046] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:49.856 [2024-07-25 07:32:57.129073] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:49.856 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.857 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:49.857 [2024-07-25 07:32:57.220344] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:50.117 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.117 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:50.117 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:50.117 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:50.117 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:50.118 07:32:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:50.118 07:32:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:26:50.379 [2024-07-25 07:32:57.525774] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:50.379 [2024-07-25 07:32:57.525792] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:50.379 [2024-07-25 07:32:57.525798] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:51.022 07:32:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:51.022 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.023 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.023 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.023 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:51.023 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:51.023 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:51.023 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:51.023 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:51.023 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.285 [2024-07-25 07:32:58.396363] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:51.285 [2024-07-25 07:32:58.396386] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:51.285 [2024-07-25 07:32:58.403687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.285 
[2024-07-25 07:32:58.403713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.285 [2024-07-25 07:32:58.403723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.285 [2024-07-25 07:32:58.403730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.285 [2024-07-25 07:32:58.403738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.285 [2024-07-25 07:32:58.403745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.285 [2024-07-25 07:32:58.403753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.285 [2024-07-25 07:32:58.403760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.285 [2024-07-25 07:32:58.403767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3ca20 is same with the state(5) to be set 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:51.285 [2024-07-25 07:32:58.413701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3ca20 (9): Bad file descriptor 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.285 [2024-07-25 07:32:58.423741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:51.285 [2024-07-25 07:32:58.424210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.285 [2024-07-25 07:32:58.424227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3ca20 with addr=10.0.0.2, port=4420 00:26:51.285 [2024-07-25 07:32:58.424235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3ca20 is same with the state(5) to be set 00:26:51.285 [2024-07-25 07:32:58.424246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3ca20 (9): Bad file descriptor 00:26:51.285 [2024-07-25 07:32:58.424257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:51.285 [2024-07-25 07:32:58.424265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:51.285 [2024-07-25 07:32:58.424273] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:51.285 [2024-07-25 07:32:58.424285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.285 [2024-07-25 07:32:58.433797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:51.285 [2024-07-25 07:32:58.434073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.285 [2024-07-25 07:32:58.434092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3ca20 with addr=10.0.0.2, port=4420 00:26:51.285 [2024-07-25 07:32:58.434100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3ca20 is same with the state(5) to be set 00:26:51.285 [2024-07-25 07:32:58.434117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3ca20 (9): Bad file descriptor 00:26:51.285 [2024-07-25 07:32:58.434127] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:51.285 [2024-07-25 07:32:58.434134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:51.285 [2024-07-25 07:32:58.434141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:51.285 [2024-07-25 07:32:58.434152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.285 [2024-07-25 07:32:58.443852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:51.285 [2024-07-25 07:32:58.444174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.285 [2024-07-25 07:32:58.444188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3ca20 with addr=10.0.0.2, port=4420 00:26:51.285 [2024-07-25 07:32:58.444196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3ca20 is same with the state(5) to be set 00:26:51.285 [2024-07-25 07:32:58.444212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3ca20 (9): Bad file descriptor 00:26:51.285 [2024-07-25 07:32:58.444223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:51.285 [2024-07-25 07:32:58.444229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:51.285 [2024-07-25 07:32:58.444236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:51.285 [2024-07-25 07:32:58.444247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
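(Aside, not part of the captured run: the repeated connect() failures above report errno = 111, which on Linux is ECONNREFUSED — expected at this point in the test, since the 10.0.0.2:4420 listener was just removed. A minimal bash sketch that reproduces the same refusal, assuming the test's network namespaces are still in place, would be:

    # illustrative only: attempt a TCP connect to the removed 4420 listener
    ( exec 3<>/dev/tcp/10.0.0.2/4420 ) 2>/dev/null \
        || echo "connect to 10.0.0.2:4420 refused (ECONNREFUSED, errno 111)"
)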
00:26:51.285 [2024-07-25 07:32:58.453909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:51.285 [2024-07-25 07:32:58.454157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.285 [2024-07-25 07:32:58.454172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3ca20 with addr=10.0.0.2, port=4420 00:26:51.285 [2024-07-25 07:32:58.454180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3ca20 is same with the state(5) to be set 00:26:51.285 [2024-07-25 07:32:58.454192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3ca20 (9): Bad file descriptor 00:26:51.285 [2024-07-25 07:32:58.454209] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:51.285 [2024-07-25 07:32:58.454217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:51.285 [2024-07-25 07:32:58.454225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:51.285 [2024-07-25 07:32:58.454235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:51.285 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.286 [2024-07-25 07:32:58.463962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:51.286 [2024-07-25 07:32:58.464237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.286 [2024-07-25 07:32:58.464258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3ca20 with addr=10.0.0.2, port=4420 00:26:51.286 [2024-07-25 07:32:58.464269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3ca20 is same with the state(5) to be set 00:26:51.286 [2024-07-25 07:32:58.464283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3ca20 (9): Bad file descriptor 00:26:51.286 [2024-07-25 07:32:58.464297] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:51.286 [2024-07-25 
07:32:58.464306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:51.286 [2024-07-25 07:32:58.464314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:51.286 [2024-07-25 07:32:58.464326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:51.286 [2024-07-25 07:32:58.474016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:51.286 [2024-07-25 07:32:58.474524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.286 [2024-07-25 07:32:58.474563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3ca20 with addr=10.0.0.2, port=4420 00:26:51.286 [2024-07-25 07:32:58.474574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3ca20 is same with the state(5) to be set 00:26:51.286 [2024-07-25 07:32:58.474593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3ca20 (9): Bad file descriptor 00:26:51.286 [2024-07-25 07:32:58.474604] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:51.286 [2024-07-25 07:32:58.474611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:51.286 [2024-07-25 07:32:58.474619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:51.286 [2024-07-25 07:32:58.474634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.286 [2024-07-25 07:32:58.484074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:51.286 [2024-07-25 07:32:58.484647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.286 [2024-07-25 07:32:58.484685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b3ca20 with addr=10.0.0.2, port=4420 00:26:51.286 [2024-07-25 07:32:58.484697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3ca20 is same with the state(5) to be set 00:26:51.286 [2024-07-25 07:32:58.484718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3ca20 (9): Bad file descriptor 00:26:51.286 [2024-07-25 07:32:58.484731] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:51.286 [2024-07-25 07:32:58.484739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:51.286 [2024-07-25 07:32:58.484749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:51.286 [2024-07-25 07:32:58.484768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
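(Aside, not part of the captured run: the waitforcondition/eval lines threaded through this trace come from the shared autotest helper. A minimal sketch of the bounded-retry pattern the trace shows is below; the 10-attempt cap and the eval of the condition string are visible above, while the sleep between attempts and the non-zero return on exhaustion are assumptions not shown in this log:

    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10       # matches the 'local max=10' seen in the trace
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1        # assumed pacing between attempts
        done
        return 1           # assumed failure path; not exercised in this log
    }
)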
00:26:51.286 [2024-07-25 07:32:58.484884] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:51.286 [2024-07-25 07:32:58.484901] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:51.286 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.548 07:32:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.492 [2024-07-25 07:32:59.855414] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:52.492 [2024-07-25 07:32:59.855431] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:52.492 [2024-07-25 07:32:59.855444] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:52.754 [2024-07-25 07:32:59.944716] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:52.754 [2024-07-25 07:33:00.050277] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:52.754 [2024-07-25 07:33:00.050313] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:26:52.754 request: 00:26:52.754 { 00:26:52.754 "name": "nvme", 00:26:52.754 "trtype": "tcp", 00:26:52.754 "traddr": "10.0.0.2", 00:26:52.754 "adrfam": "ipv4", 00:26:52.754 "trsvcid": "8009", 00:26:52.754 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:52.754 "wait_for_attach": true, 00:26:52.754 "method": "bdev_nvme_start_discovery", 00:26:52.754 "req_id": 1 00:26:52.754 } 00:26:52.754 Got JSON-RPC error response 00:26:52.754 response: 00:26:52.754 { 00:26:52.754 "code": -17, 00:26:52.754 "message": "File exists" 00:26:52.754 } 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:52.754 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.016 request: 00:26:53.016 { 00:26:53.016 "name": "nvme_second", 00:26:53.016 "trtype": "tcp", 00:26:53.016 "traddr": "10.0.0.2", 00:26:53.016 "adrfam": "ipv4", 00:26:53.016 "trsvcid": "8009", 00:26:53.016 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:53.016 "wait_for_attach": true, 00:26:53.016 "method": "bdev_nvme_start_discovery", 00:26:53.016 "req_id": 1 00:26:53.016 } 00:26:53.016 Got JSON-RPC error response 00:26:53.016 response: 00:26:53.016 { 00:26:53.016 "code": -17, 00:26:53.016 "message": "File exists" 00:26:53.016 } 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:53.016 07:33:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.016 07:33:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.959 [2024-07-25 07:33:01.326639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.959 [2024-07-25 07:33:01.326681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6d220 with addr=10.0.0.2, port=8010 00:26:53.959 [2024-07-25 07:33:01.326696] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:53.959 [2024-07-25 07:33:01.326704] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:53.959 [2024-07-25 07:33:01.326712] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:55.345 [2024-07-25 07:33:02.328933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.345 [2024-07-25 07:33:02.328958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b6d220 with addr=10.0.0.2, port=8010 00:26:55.345 [2024-07-25 07:33:02.328969] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:55.345 [2024-07-25 07:33:02.328976] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:55.345 [2024-07-25 07:33:02.328983] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:56.288 [2024-07-25 07:33:03.330772] 
bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:56.288 request: 00:26:56.288 { 00:26:56.288 "name": "nvme_second", 00:26:56.288 "trtype": "tcp", 00:26:56.288 "traddr": "10.0.0.2", 00:26:56.288 "adrfam": "ipv4", 00:26:56.288 "trsvcid": "8010", 00:26:56.288 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:56.288 "wait_for_attach": false, 00:26:56.288 "attach_timeout_ms": 3000, 00:26:56.288 "method": "bdev_nvme_start_discovery", 00:26:56.288 "req_id": 1 00:26:56.288 } 00:26:56.288 Got JSON-RPC error response 00:26:56.288 response: 00:26:56.288 { 00:26:56.288 "code": -110, 00:26:56.288 "message": "Connection timed out" 00:26:56.288 } 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 219566 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.288 rmmod nvme_tcp 00:26:56.288 rmmod nvme_fabrics 00:26:56.288 rmmod nvme_keyring 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:56.288 07:33:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 219476 ']' 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 219476 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 219476 ']' 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 219476 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 219476 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 219476' 00:26:56.288 killing process with pid 219476 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 219476 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 219476 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.288 07:33:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.834 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:58.834 00:26:58.834 real 0m19.658s 00:26:58.834 user 0m22.862s 00:26:58.834 sys 0m6.906s 00:26:58.834 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:58.834 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.834 ************************************ 00:26:58.834 END TEST nvmf_host_discovery 00:26:58.834 ************************************ 00:26:58.834 07:33:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:58.834 07:33:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:58.834 07:33:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:58.834 07:33:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.834 
************************************ 00:26:58.834 START TEST nvmf_host_multipath_status 00:26:58.834 ************************************ 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:58.835 * Looking for test storage... 00:26:58.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:58.835 07:33:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # net_devs=() 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:06.985 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.985 
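(Aside, not part of the captured run: the scan above has just matched 0000:4b:00.0 against the E810 PCI ID pair 0x8086:0x159b. Outside the harness the same NICs can be listed directly with pciutils, e.g.:

    lspci -nn -d 8086:159b   # list Intel E810 ports (vendor 0x8086, device 0x159b)
)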
07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:06.985 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:06.985 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:06.985 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:06.985 07:33:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:06.986 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.986 07:33:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.986 07:33:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:06.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:27:06.986 00:27:06.986 --- 10.0.0.2 ping statistics --- 00:27:06.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.986 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:27:06.986 00:27:06.986 --- 10.0.0.1 ping statistics --- 00:27:06.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.986 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=226243 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 226243 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 226243 ']' 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:06.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.986 07:33:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.986 [2024-07-25 07:33:13.251083] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:27:06.986 [2024-07-25 07:33:13.251148] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.986 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.986 [2024-07-25 07:33:13.323413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:06.986 [2024-07-25 07:33:13.396774] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.986 [2024-07-25 07:33:13.396813] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.986 [2024-07-25 07:33:13.396821] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.986 [2024-07-25 07:33:13.396827] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.986 [2024-07-25 07:33:13.396833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.986 [2024-07-25 07:33:13.396980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.986 [2024-07-25 07:33:13.396982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.986 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.986 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:27:06.986 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:06.986 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:06.986 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.986 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.986 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=226243 00:27:06.986 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:06.986 [2024-07-25 07:33:14.204750] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.986 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:07.247 Malloc0 00:27:07.247 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:07.247 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:07.509 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.509 [2024-07-25 07:33:14.843806] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.509 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:07.770 [2024-07-25 07:33:14.984130] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:07.770 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=226607 00:27:07.770 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:07.770 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:07.770 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 226607 /var/tmp/bdevperf.sock 00:27:07.770 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 226607 ']' 00:27:07.770 07:33:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:07.770 07:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.770 07:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:07.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
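Condensed recap of the fixture the trace has built so far: the second E810 port (cvl_0_0) is moved into a private network namespace, nvmf_tgt is started inside that namespace, and one subsystem with a single Malloc namespace and two TCP listeners (ports 4420 and 4421) is created so the host side can exercise multipath; bdevperf is then started in standby on its own RPC socket. This is a sketch assembled only from commands visible in the trace above; paths are shortened, $SPDK is a shorthand (not used by the script itself) for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and backgrounding with & stands in for the waitforlisten helpers the test actually uses.

  # namespace wiring (from nvmf_tcp_init in nvmf/common.sh)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # target: one Malloc namespace, two listeners on the same subsystem
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r appears to enable ANA reporting
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # host: bdevperf in standby (-z), driven later over /var/tmp/bdevperf.sock
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &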
00:27:07.770 07:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.770 07:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:08.713 07:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:08.713 07:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:27:08.713 07:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:08.713 07:33:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:09.285 Nvme0n1 00:27:09.285 07:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:09.545 Nvme0n1 00:27:09.545 07:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:09.545 07:33:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:11.639 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:11.639 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:11.639 07:33:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:11.899 07:33:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:12.842 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:12.842 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:12.842 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.842 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:13.103 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.103 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:13.103 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.103 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:13.103 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:13.103 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:13.103 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.103 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:13.364 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.364 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:13.364 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.364 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:13.625 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.625 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:13.625 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.625 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:13.625 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.625 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:13.625 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.625 07:33:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:13.886 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.886 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:13.886 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:14.147 07:33:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:14.147 07:33:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:15.113 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:15.113 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:15.113 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.113 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:15.373 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.374 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:15.374 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.374 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:15.635 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.635 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:15.635 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.635 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:15.635 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.635 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:15.635 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.635 07:33:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:15.894 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.895 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:15.895 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.895 07:33:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:16.155 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.155 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:16.155 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.155 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.155 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.155 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:16.155 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:16.418 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:16.679 07:33:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:17.622 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:17.622 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:17.622 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.622 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:17.883 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.883 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:17.883 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.883 07:33:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:17.883 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.883 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:17.883 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.883 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.144 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.144 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:18.144 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.144 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:18.405 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.405 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:18.405 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.405 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:18.405 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.405 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:18.405 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.405 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:18.665 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.665 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:18.665 07:33:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:18.665 07:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:18.926 07:33:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:19.869 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:19.869 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:19.869 07:33:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.869 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:20.129 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.129 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:20.129 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.129 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:20.389 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.389 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:20.389 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.389 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:20.389 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.389 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:20.389 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.389 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:20.650 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.650 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:20.650 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.650 07:33:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:20.911 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.911 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:20.912 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.912 07:33:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:20.912 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.912 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:20.912 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:21.173 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:21.173 07:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.560 07:33:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:22.821 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.821 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:22.821 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.821 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:22.821 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.821 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:22.821 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.821 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:23.082 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.082 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:23.082 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.082 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:23.341 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.341 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:23.341 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:23.341 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:23.602 07:33:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:24.545 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:24.545 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:24.545 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.545 07:33:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:24.806 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:24.806 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:24.806 07:33:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.806 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:25.067 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.067 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:25.067 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.067 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:25.067 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.067 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:25.067 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:25.067 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.329 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.329 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:25.329 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.329 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:25.589 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:25.589 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:25.589 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.589 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:25.589 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.589 07:33:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:25.849 07:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:27:25.849 07:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:26.109 07:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:26.109 07:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:27.049 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:27.049 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:27.049 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.049 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:27.309 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.309 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:27.309 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.309 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:27.570 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.570 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:27.570 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.570 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:27.570 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.570 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:27.570 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.570 07:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:27.831 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.831 07:33:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:27.831 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.831 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:27.831 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.831 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:28.091 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.091 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:28.091 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.091 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:28.091 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:28.351 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:28.351 07:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:29.734 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:29.734 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:29.734 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.734 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:29.734 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:29.734 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:29.734 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.734 07:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:29.734 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.734 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:29.734 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.734 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:29.995 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.995 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:29.995 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.995 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:30.257 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.257 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:30.257 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.257 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:30.257 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.257 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:30.257 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.257 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:30.519 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.519 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:30.519 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:30.781 07:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:30.781 07:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
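Each cycle in the trace follows the same shape: set_ANA_state writes a new ANA state to each listener, the script sleeps for a second, then check_status reads the current/connected/accessible flags of both paths back from bdevperf. The sketch below approximates that read-back step; the port_status name, the RPC call and the jq filter are taken from the trace, while the $rpc shorthand and the function body are an approximation of host/multipath_status.sh rather than a verbatim copy.

  # per-path probe: ask bdevperf for its I/O paths and pick one attribute of one port
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  port_status() {
      local port=$1 attr=$2 expected=$3
      local actual
      actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
               jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ "$actual" == "$expected" ]]
  }

  # example matching the next check in the trace: with both listeners set to
  # non_optimized and the active_active policy set earlier, both paths are current
  port_status 4420 current true
  port_status 4421 current true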
00:27:32.168 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:32.168 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:32.168 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:32.168 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.168 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.168 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:32.168 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.169 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:32.169 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.169 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:32.169 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.169 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:32.429 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.429 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:32.429 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.429 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:32.429 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.429 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:32.691 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.691 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:32.691 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.691 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:32.691 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.691 07:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:32.953 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.953 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:32.953 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:32.953 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:33.214 07:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:34.157 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:34.157 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:34.157 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.158 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:34.419 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.419 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:34.419 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.419 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:34.704 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:34.704 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:34.704 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.704 07:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:34.704 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:27:34.704 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:34.704 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.704 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:35.008 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.008 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:35.008 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.008 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:35.270 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.270 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:35.270 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.270 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:35.270 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:35.270 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 226607 00:27:35.270 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 226607 ']' 00:27:35.270 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 226607 00:27:35.270 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:27:35.270 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.270 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 226607 00:27:35.534 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:35.534 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:35.534 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 226607' 00:27:35.534 killing process with pid 226607 00:27:35.534 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 226607 00:27:35.534 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 226607 00:27:35.534 Connection closed with partial response: 00:27:35.534 00:27:35.534 00:27:35.534 
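Shutting bdevperf down goes through killprocess from autotest_common.sh, and the trace above shows its shape: confirm the pid is set and still alive, look up the process name (reactor_2 in this run) and refuse to signal a bare sudo wrapper, then kill and wait so the exit status gets reaped before the @139 wait in the script. A simplified sketch inferred from that trace, not a verbatim copy of the helper:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                   # the '[' -z 226607 ']' guard above
      kill -0 "$pid" 2>/dev/null || return 0      # nothing left to kill
      local name
      name=$(ps --no-headers -o comm= "$pid")     # reactor_2 here
      [ "$name" != sudo ] || return 1             # never signal the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }

The "Connection closed with partial response" lines are, as far as the trace shows, just the RPC connection to bdevperf being cut while the process dies, which is expected at this point in the test.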
07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 226607 00:27:35.534 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:35.534 [2024-07-25 07:33:15.047120] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:27:35.534 [2024-07-25 07:33:15.047177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid226607 ] 00:27:35.534 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.534 [2024-07-25 07:33:15.096921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.534 [2024-07-25 07:33:15.148743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.534 Running I/O for 90 seconds... 00:27:35.534 [2024-07-25 07:33:28.343540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.343577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.343610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.343617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.343628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.343634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.343644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.343649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.343660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.343665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.343676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.343682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.343692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.343698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
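Everything from the cat of try.txt above down to the latency summary is the saved bdevperf output being replayed into the build log. The READ/WRITE command prints that follow are each paired with a completion carrying ASYMMETRIC ACCESS INACCESSIBLE (03/02): I/O completing with an ANA error on a path whose listener the test had put into the inaccessible state, which the host multipath layer is expected to absorb by retrying on the other port. For a quick tally of those errors instead of the full dump, a hypothetical one-liner like the following would do it (run before the script deletes try.txt at @147):

  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' \
      test/nvmf/host/try.txt | sort | uniq -c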
00:27:35.534 [2024-07-25 07:33:28.343708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.343714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.343970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.343979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.343992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.343998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.344009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.344022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.344033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.344038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.344049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.344055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.344065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.344071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.344081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.344086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.344097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.344103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.344997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.534 [2024-07-25 07:33:28.345006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.534 [2024-07-25 07:33:28.345024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.534 [2024-07-25 07:33:28.345042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.534 [2024-07-25 07:33:28.345061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.534 [2024-07-25 07:33:28.345079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.534 [2024-07-25 07:33:28.345097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.534 [2024-07-25 07:33:28.345116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.534 [2024-07-25 07:33:28.345135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.345153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.345172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.345191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.345213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.345232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.345251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.345269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.534 [2024-07-25 07:33:28.345288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:35.534 [2024-07-25 07:33:28.345301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:35.535 [2024-07-25 07:33:28.345384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.345979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.345985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:27:35.535 [2024-07-25 07:33:28.346256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.535 [2024-07-25 07:33:28.346305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:35.535 [2024-07-25 07:33:28.346321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346676] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:28.346714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:28.346720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.451824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:40.451861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.451893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:40.451900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.451980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:40.451989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.536 [2024-07-25 07:33:40.452006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.536 [2024-07-25 07:33:40.452510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.536 [2024-07-25 07:33:40.452526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.536 [2024-07-25 07:33:40.452542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 
07:33:40.452558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:40.452576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.536 [2024-07-25 07:33:40.452593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:40.452609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:40.452624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:40.452639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:40.452656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:40.452672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:40.452773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:35.536 [2024-07-25 07:33:40.452785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.536 [2024-07-25 07:33:40.452790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:35.536 Received shutdown signal, test time was about 25.798002 seconds 00:27:35.536 00:27:35.536 Latency(us) 00:27:35.537 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:27:35.537 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:35.537 Verification LBA range: start 0x0 length 0x4000 00:27:35.537 Nvme0n1 : 25.80 11131.75 43.48 0.00 0.00 11479.95 402.77 3019898.88 00:27:35.537 =================================================================================================================== 00:27:35.537 Total : 11131.75 43.48 0.00 0.00 11479.95 402.77 3019898.88 00:27:35.537 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:35.798 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:35.798 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:35.798 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:35.798 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:35.798 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:27:35.798 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:35.798 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:27:35.798 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:35.798 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:35.798 rmmod nvme_tcp 00:27:35.798 rmmod nvme_fabrics 00:27:35.798 rmmod nvme_keyring 00:27:35.798 07:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 226243 ']' 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 226243 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 226243 ']' 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 226243 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 226243 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 226243' 00:27:35.798 killing process with pid 226243 00:27:35.798 07:33:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 226243 00:27:35.798 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 226243 00:27:36.059 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:36.059 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:36.059 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:36.059 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.059 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.059 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.059 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.059 07:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.972 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:37.972 00:27:37.972 real 0m39.486s 00:27:37.972 user 1m41.835s 00:27:37.972 sys 0m10.887s 00:27:37.972 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:37.972 07:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:37.972 ************************************ 00:27:37.972 END TEST nvmf_host_multipath_status 00:27:37.972 ************************************ 00:27:37.972 07:33:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:37.972 07:33:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:37.972 07:33:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:37.972 07:33:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.233 ************************************ 00:27:38.233 START TEST nvmf_discovery_remove_ifc 00:27:38.233 ************************************ 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:38.233 * Looking for test storage... 
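The hand-off between the two tests above follows the suite's standard pattern: once the multipath status test passes (11131.75 IOPS, 43.48 MiB/s over the 25.8 s bdevperf run, per the summary above), nvmftestfini syncs, unloads the nvme-tcp/nvme-fabrics/nvme-keyring modules, kills the long-running nvmf target (pid 226243 here), removes the spdk network namespace and flushes cvl_0_1, and only then does run_test launch the next script, nvmf_discovery_remove_ifc. Condensed to its traced steps (names as they appear in the log, details simplified):

  nvmftestfini() {
      sync
      modprobe -v -r nvme-tcp        # rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
      modprobe -v -r nvme-fabrics
      killprocess 226243             # the nvmf target pid from this run
      _remove_spdk_ns                # tear down cvl_0_0_ns_spdk
      ip -4 addr flush cvl_0_1
  }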
00:27:38.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.233 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.234 07:33:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:44.819 07:33:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:44.819 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:44.819 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:44.819 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.819 
07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:44.819 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.819 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:45.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:45.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:27:45.081 00:27:45.081 --- 10.0.0.2 ping statistics --- 00:27:45.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.081 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:27:45.081 00:27:45.081 --- 10.0.0.1 ping statistics --- 00:27:45.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.081 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:45.081 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=236326 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 236326 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 236326 ']' 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
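The trace above is nvmftestinit building the back-to-back TCP test bed: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, connectivity is checked with ping in both directions, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch of that setup, reusing the interface names, addresses and binary path recorded in this run:

  # sketch of the namespace topology set up by nvmf/common.sh in the trace above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                            # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator sanity check
  # target application runs inside the namespace (nvmfappstart -m 0x2 in the script)
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &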
00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:45.342 07:33:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.342 [2024-07-25 07:33:52.519106] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:27:45.342 [2024-07-25 07:33:52.519196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.342 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.342 [2024-07-25 07:33:52.610594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.342 [2024-07-25 07:33:52.703502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.342 [2024-07-25 07:33:52.703565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.342 [2024-07-25 07:33:52.703573] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.342 [2024-07-25 07:33:52.703580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.342 [2024-07-25 07:33:52.703587] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.342 [2024-07-25 07:33:52.703613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.289 [2024-07-25 07:33:53.362567] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.289 [2024-07-25 07:33:53.370793] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:46.289 null0 00:27:46.289 [2024-07-25 07:33:53.402756] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=236508 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 236508 /tmp/host.sock 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 236508 ']' 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:46.289 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:46.289 07:33:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.289 [2024-07-25 07:33:53.476376] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:27:46.289 [2024-07-25 07:33:53.476439] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236508 ] 00:27:46.289 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.289 [2024-07-25 07:33:53.540263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.289 [2024-07-25 07:33:53.614327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:47.231 
07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.231 07:33:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.174 [2024-07-25 07:33:55.373429] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:48.174 [2024-07-25 07:33:55.373448] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:48.174 [2024-07-25 07:33:55.373463] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:48.174 [2024-07-25 07:33:55.462757] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:48.436 [2024-07-25 07:33:55.687066] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:48.436 [2024-07-25 07:33:55.687116] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:48.436 [2024-07-25 07:33:55.687141] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:48.436 [2024-07-25 07:33:55.687155] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:48.436 [2024-07-25 07:33:55.687175] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.436 [2024-07-25 07:33:55.691248] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12b97c0 was disconnected and freed. delete nvme_qpair. 
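The host side of the test is a second nvmf_tgt instance started with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme and then driven entirely over that private RPC socket: options are set, the framework is initialized, discovery is started against the target's discovery service on 10.0.0.2:8009, and bdev_get_bdevs is polled until nvme0n1 shows up. Condensed into plain RPC calls (assuming rpc_cmd in the script wraps scripts/rpc.py with the socket shown), the sequence looks like:

  # host-side RPC sequence from the trace; arguments are the ones recorded above
  rpc="scripts/rpc.py -s /tmp/host.sock"
  $rpc bdev_nvme_set_options -e 1
  $rpc framework_start_init
  $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
  $rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs    # expect: nvme0n1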
00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:48.436 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:48.697 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:48.697 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.697 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.697 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.697 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.697 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.697 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.697 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:48.697 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.697 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:48.697 07:33:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:49.641 07:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:49.641 07:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.641 07:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:49.641 07:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.641 07:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.641 07:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:49.641 07:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:49.641 07:33:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.641 07:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:49.641 07:33:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:51.028 07:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:51.028 07:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.028 07:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:51.028 07:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.028 07:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:51.028 07:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.028 07:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:51.028 07:33:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.028 07:33:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:51.028 07:33:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:51.971 07:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:51.971 07:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.971 07:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:51.971 07:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.971 07:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:51.971 07:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.972 07:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:51.972 07:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.972 07:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:51.972 07:33:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:52.914 07:34:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:52.914 07:34:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:52.914 07:34:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.914 07:34:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:52.914 07:34:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.914 07:34:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.914 07:34:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:52.914 07:34:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.914 07:34:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:52.914 07:34:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:53.857 [2024-07-25 07:34:01.127312] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:53.857 [2024-07-25 07:34:01.127368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.857 [2024-07-25 07:34:01.127380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.857 [2024-07-25 07:34:01.127390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.857 [2024-07-25 07:34:01.127397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.857 [2024-07-25 07:34:01.127405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.857 [2024-07-25 07:34:01.127412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.857 [2024-07-25 07:34:01.127420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.858 [2024-07-25 07:34:01.127427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.858 [2024-07-25 07:34:01.127435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.858 [2024-07-25 07:34:01.127443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.858 [2024-07-25 07:34:01.127450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280170 is same with the state(5) to be set 00:27:53.858 [2024-07-25 07:34:01.137333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1280170 (9): Bad file descriptor 00:27:53.858 07:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:53.858 07:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:53.858 07:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.858 07:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:53.858 [2024-07-25 07:34:01.147375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:53.858 07:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.858 07:34:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:53.858 07:34:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:55.312 [2024-07-25 07:34:02.193230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:55.312 [2024-07-25 07:34:02.193278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1280170 with addr=10.0.0.2, port=4420 00:27:55.312 [2024-07-25 07:34:02.193292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280170 is same with the state(5) to be set 00:27:55.312 [2024-07-25 07:34:02.193318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1280170 (9): Bad file descriptor 00:27:55.312 [2024-07-25 07:34:02.193698] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:55.312 [2024-07-25 07:34:02.193723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:55.312 [2024-07-25 07:34:02.193731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:55.312 [2024-07-25 07:34:02.193740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:55.312 [2024-07-25 07:34:02.193757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:55.312 [2024-07-25 07:34:02.193766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:55.312 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.312 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:55.313 07:34:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:55.885 [2024-07-25 07:34:03.196149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:55.885 [2024-07-25 07:34:03.196173] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:55.885 [2024-07-25 07:34:03.196181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:55.885 [2024-07-25 07:34:03.196189] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:55.885 [2024-07-25 07:34:03.196206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
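The reset and reconnect errors above are the expected effect of the fault the test injects: discovery_remove_ifc.sh@75-76 removes the listen address and downs the interface inside the target namespace, so the host's connection to nvme0 dies, reconnect attempts fail with errno 110, and once the 2-second --ctrlr-loss-timeout-sec expires bdev_nvme deletes the controller and nvme0n1 drops out of bdev_get_bdevs (the wait_for_bdev '' loop above is polling for exactly that). The fault itself is just two commands, repeated here from the trace:

  # pull the target-side interface out from under the established connection
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down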
00:27:55.885 [2024-07-25 07:34:03.196226] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:55.885 [2024-07-25 07:34:03.196248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.885 [2024-07-25 07:34:03.196259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.885 [2024-07-25 07:34:03.196270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.885 [2024-07-25 07:34:03.196277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.885 [2024-07-25 07:34:03.196286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.885 [2024-07-25 07:34:03.196293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.885 [2024-07-25 07:34:03.196301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.885 [2024-07-25 07:34:03.196309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.885 [2024-07-25 07:34:03.196317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.885 [2024-07-25 07:34:03.196325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.885 [2024-07-25 07:34:03.196332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:27:55.885 [2024-07-25 07:34:03.196632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127f610 (9): Bad file descriptor 00:27:55.885 [2024-07-25 07:34:03.197643] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:55.885 [2024-07-25 07:34:03.197655] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:55.885 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:55.885 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:55.885 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:55.885 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:55.885 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.886 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.886 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:55.886 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:56.151 07:34:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:57.095 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:57.095 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.095 07:34:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:57.095 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.095 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:57.095 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.095 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:57.095 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.356 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:57.356 07:34:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:57.928 [2024-07-25 07:34:05.216761] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:57.928 [2024-07-25 07:34:05.216781] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:57.928 [2024-07-25 07:34:05.216796] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:58.188 [2024-07-25 07:34:05.347222] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:58.188 [2024-07-25 07:34:05.409367] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:58.188 [2024-07-25 07:34:05.409406] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:58.188 [2024-07-25 07:34:05.409428] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:58.188 [2024-07-25 07:34:05.409443] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:58.188 [2024-07-25 07:34:05.409450] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:58.188 [2024-07-25 07:34:05.415124] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1286fd0 was disconnected and freed. delete nvme_qpair. 
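Recovery is the mirror image: discovery_remove_ifc.sh@82-83 re-adds the address and brings the link back up, the still-running discovery poller on the host re-attaches on its own, a fresh controller nvme1 is created, and wait_for_bdev nvme1n1 sees the namespace return (the nvme1 attach and the freed qpair 0x1286fd0 above). Sketched as the two restore commands plus the check the script performs, again assuming rpc_cmd wraps scripts/rpc.py:

  # restore the target-side interface; the discovery poller re-attaches automatically
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'    # expect: nvme1n1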
00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 236508 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 236508 ']' 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 236508 00:27:58.188 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:58.449 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.449 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 236508 00:27:58.449 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:58.449 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:58.449 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 236508' 00:27:58.449 killing process with pid 236508 00:27:58.449 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 236508 00:27:58.449 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 236508 00:27:58.449 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:58.449 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:58.449 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:58.449 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:58.450 rmmod nvme_tcp 00:27:58.450 rmmod nvme_fabrics 00:27:58.450 rmmod nvme_keyring 00:27:58.450 07:34:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 236326 ']' 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 236326 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 236326 ']' 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 236326 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:58.450 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 236326 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 236326' 00:27:58.711 killing process with pid 236326 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 236326 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 236326 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.711 07:34:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.259 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:01.259 00:28:01.259 real 0m22.672s 00:28:01.259 user 0m27.164s 00:28:01.259 sys 0m6.447s 00:28:01.259 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:01.259 07:34:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:01.259 ************************************ 00:28:01.259 END TEST nvmf_discovery_remove_ifc 00:28:01.259 ************************************ 00:28:01.259 07:34:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:01.259 07:34:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:01.259 07:34:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:01.259 07:34:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.259 ************************************ 00:28:01.259 START TEST nvmf_identify_kernel_target 00:28:01.259 ************************************ 00:28:01.259 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:01.259 * Looking for test storage... 00:28:01.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.260 07:34:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:28:01.260 07:34:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:28:07.853 
07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.853 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:07.854 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:07.854 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:07.854 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:07.854 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.854 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:08.115 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.115 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.115 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.115 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:08.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:08.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:28:08.115 00:28:08.115 --- 10.0.0.2 ping statistics --- 00:28:08.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.115 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:08.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:28:08.116 00:28:08.116 --- 10.0.0.1 ping statistics --- 00:28:08.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.116 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:08.116 07:34:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:11.416 Waiting for block devices as requested 00:28:11.416 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:11.416 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:11.677 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:11.677 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:11.677 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:11.937 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:11.937 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:11.937 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:12.198 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:12.198 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:12.458 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:12.458 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:12.458 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:12.458 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:12.719 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:12.719 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:12.719 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
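For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) splits the two e810 ports between the root namespace and a private one before any NVMe/TCP traffic is generated. A minimal standalone sketch of that setup, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing used in this run:

  # cvl_0_0 becomes the in-namespace endpoint (NVMF_FIRST_TARGET_IP=10.0.0.2);
  # cvl_0_1 stays in the root namespace (NVMF_INITIATOR_IP=10.0.0.1).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port and verify both directions respond before the test proceeds.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1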
00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:12.979 No valid GPT data, bailing 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:12.979 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:13.241 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:13.241 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:13.241 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:28:13.241 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:13.241 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:28:13.241 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:13.241 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:28:13.241 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:28:13.241 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:28:13.241 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:13.241 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:13.241 00:28:13.241 Discovery Log Number of Records 2, Generation counter 2 00:28:13.241 =====Discovery Log Entry 0====== 00:28:13.241 trtype: tcp 00:28:13.241 adrfam: ipv4 00:28:13.241 subtype: current discovery subsystem 00:28:13.241 treq: not specified, sq flow control disable supported 00:28:13.242 portid: 1 00:28:13.242 trsvcid: 4420 00:28:13.242 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:13.242 traddr: 10.0.0.1 00:28:13.242 eflags: none 00:28:13.242 sectype: none 00:28:13.242 =====Discovery Log Entry 1====== 00:28:13.242 trtype: tcp 00:28:13.242 adrfam: ipv4 00:28:13.242 subtype: nvme subsystem 00:28:13.242 treq: not specified, sq flow control disable supported 00:28:13.242 portid: 1 00:28:13.242 trsvcid: 4420 00:28:13.242 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:13.242 traddr: 10.0.0.1 00:28:13.242 eflags: none 00:28:13.242 sectype: none 00:28:13.242 07:34:20 
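The configure_kernel_target trace above records only the bare mkdir/echo/ln -s commands; xtrace does not show where each echo is redirected. A sketch of the equivalent standalone nvmet configfs setup, with the usual attribute files filled in as assumptions about common.sh's redirect targets (the Model Number reported below, SPDK-nqn.2016-06.io.spdk:testnqn, is consistent with the attr_model guess):

  # Export /dev/nvme0n1 through the Linux kernel nvmet target on 10.0.0.1:4420 (TCP).
  modprobe nvmet            # the tcp transport module (nvmet_tcp) must also end up loaded; cleanup removes both
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  ns=$subsys/namespaces/1
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$ns" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target of 'echo SPDK-...'
  echo 1             > "$subsys/attr_allow_any_host"             # assumed target of the first 'echo 1'
  echo /dev/nvme0n1  > "$ns/device_path"
  echo 1             > "$ns/enable"
  echo 10.0.0.1      > "$port/addr_traddr"
  echo tcp           > "$port/addr_trtype"
  echo 4420          > "$port/addr_trsvcid"
  echo ipv4          > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  # Discovery then returns the two records shown above:
  nvme discover -t tcp -a 10.0.0.1 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be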
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:13.242 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:13.242 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.242 ===================================================== 00:28:13.242 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:13.242 ===================================================== 00:28:13.242 Controller Capabilities/Features 00:28:13.242 ================================ 00:28:13.242 Vendor ID: 0000 00:28:13.242 Subsystem Vendor ID: 0000 00:28:13.242 Serial Number: d92d14067a8bae89ad27 00:28:13.242 Model Number: Linux 00:28:13.242 Firmware Version: 6.7.0-68 00:28:13.242 Recommended Arb Burst: 0 00:28:13.242 IEEE OUI Identifier: 00 00 00 00:28:13.242 Multi-path I/O 00:28:13.242 May have multiple subsystem ports: No 00:28:13.242 May have multiple controllers: No 00:28:13.242 Associated with SR-IOV VF: No 00:28:13.242 Max Data Transfer Size: Unlimited 00:28:13.242 Max Number of Namespaces: 0 00:28:13.242 Max Number of I/O Queues: 1024 00:28:13.242 NVMe Specification Version (VS): 1.3 00:28:13.242 NVMe Specification Version (Identify): 1.3 00:28:13.242 Maximum Queue Entries: 1024 00:28:13.242 Contiguous Queues Required: No 00:28:13.242 Arbitration Mechanisms Supported 00:28:13.242 Weighted Round Robin: Not Supported 00:28:13.242 Vendor Specific: Not Supported 00:28:13.242 Reset Timeout: 7500 ms 00:28:13.242 Doorbell Stride: 4 bytes 00:28:13.242 NVM Subsystem Reset: Not Supported 00:28:13.242 Command Sets Supported 00:28:13.242 NVM Command Set: Supported 00:28:13.242 Boot Partition: Not Supported 00:28:13.242 Memory Page Size Minimum: 4096 bytes 00:28:13.242 Memory Page Size Maximum: 4096 bytes 00:28:13.242 Persistent Memory Region: Not Supported 00:28:13.242 Optional Asynchronous Events Supported 00:28:13.242 Namespace Attribute Notices: Not Supported 00:28:13.242 Firmware Activation Notices: Not Supported 00:28:13.242 ANA Change Notices: Not Supported 00:28:13.242 PLE Aggregate Log Change Notices: Not Supported 00:28:13.242 LBA Status Info Alert Notices: Not Supported 00:28:13.242 EGE Aggregate Log Change Notices: Not Supported 00:28:13.242 Normal NVM Subsystem Shutdown event: Not Supported 00:28:13.242 Zone Descriptor Change Notices: Not Supported 00:28:13.242 Discovery Log Change Notices: Supported 00:28:13.242 Controller Attributes 00:28:13.242 128-bit Host Identifier: Not Supported 00:28:13.242 Non-Operational Permissive Mode: Not Supported 00:28:13.242 NVM Sets: Not Supported 00:28:13.242 Read Recovery Levels: Not Supported 00:28:13.242 Endurance Groups: Not Supported 00:28:13.242 Predictable Latency Mode: Not Supported 00:28:13.242 Traffic Based Keep ALive: Not Supported 00:28:13.242 Namespace Granularity: Not Supported 00:28:13.242 SQ Associations: Not Supported 00:28:13.242 UUID List: Not Supported 00:28:13.242 Multi-Domain Subsystem: Not Supported 00:28:13.242 Fixed Capacity Management: Not Supported 00:28:13.242 Variable Capacity Management: Not Supported 00:28:13.242 Delete Endurance Group: Not Supported 00:28:13.242 Delete NVM Set: Not Supported 00:28:13.242 Extended LBA Formats Supported: Not Supported 00:28:13.242 Flexible Data Placement Supported: Not Supported 00:28:13.242 00:28:13.242 Controller Memory Buffer Support 00:28:13.242 ================================ 00:28:13.242 Supported: No 
00:28:13.242 00:28:13.242 Persistent Memory Region Support 00:28:13.242 ================================ 00:28:13.242 Supported: No 00:28:13.242 00:28:13.242 Admin Command Set Attributes 00:28:13.242 ============================ 00:28:13.242 Security Send/Receive: Not Supported 00:28:13.242 Format NVM: Not Supported 00:28:13.242 Firmware Activate/Download: Not Supported 00:28:13.242 Namespace Management: Not Supported 00:28:13.242 Device Self-Test: Not Supported 00:28:13.242 Directives: Not Supported 00:28:13.242 NVMe-MI: Not Supported 00:28:13.242 Virtualization Management: Not Supported 00:28:13.242 Doorbell Buffer Config: Not Supported 00:28:13.242 Get LBA Status Capability: Not Supported 00:28:13.242 Command & Feature Lockdown Capability: Not Supported 00:28:13.242 Abort Command Limit: 1 00:28:13.242 Async Event Request Limit: 1 00:28:13.242 Number of Firmware Slots: N/A 00:28:13.242 Firmware Slot 1 Read-Only: N/A 00:28:13.242 Firmware Activation Without Reset: N/A 00:28:13.242 Multiple Update Detection Support: N/A 00:28:13.242 Firmware Update Granularity: No Information Provided 00:28:13.242 Per-Namespace SMART Log: No 00:28:13.242 Asymmetric Namespace Access Log Page: Not Supported 00:28:13.242 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:13.242 Command Effects Log Page: Not Supported 00:28:13.242 Get Log Page Extended Data: Supported 00:28:13.242 Telemetry Log Pages: Not Supported 00:28:13.242 Persistent Event Log Pages: Not Supported 00:28:13.242 Supported Log Pages Log Page: May Support 00:28:13.242 Commands Supported & Effects Log Page: Not Supported 00:28:13.242 Feature Identifiers & Effects Log Page:May Support 00:28:13.242 NVMe-MI Commands & Effects Log Page: May Support 00:28:13.242 Data Area 4 for Telemetry Log: Not Supported 00:28:13.242 Error Log Page Entries Supported: 1 00:28:13.242 Keep Alive: Not Supported 00:28:13.242 00:28:13.242 NVM Command Set Attributes 00:28:13.242 ========================== 00:28:13.242 Submission Queue Entry Size 00:28:13.242 Max: 1 00:28:13.242 Min: 1 00:28:13.242 Completion Queue Entry Size 00:28:13.242 Max: 1 00:28:13.242 Min: 1 00:28:13.242 Number of Namespaces: 0 00:28:13.242 Compare Command: Not Supported 00:28:13.242 Write Uncorrectable Command: Not Supported 00:28:13.242 Dataset Management Command: Not Supported 00:28:13.242 Write Zeroes Command: Not Supported 00:28:13.242 Set Features Save Field: Not Supported 00:28:13.242 Reservations: Not Supported 00:28:13.242 Timestamp: Not Supported 00:28:13.242 Copy: Not Supported 00:28:13.242 Volatile Write Cache: Not Present 00:28:13.242 Atomic Write Unit (Normal): 1 00:28:13.242 Atomic Write Unit (PFail): 1 00:28:13.242 Atomic Compare & Write Unit: 1 00:28:13.242 Fused Compare & Write: Not Supported 00:28:13.242 Scatter-Gather List 00:28:13.242 SGL Command Set: Supported 00:28:13.242 SGL Keyed: Not Supported 00:28:13.242 SGL Bit Bucket Descriptor: Not Supported 00:28:13.242 SGL Metadata Pointer: Not Supported 00:28:13.242 Oversized SGL: Not Supported 00:28:13.242 SGL Metadata Address: Not Supported 00:28:13.242 SGL Offset: Supported 00:28:13.242 Transport SGL Data Block: Not Supported 00:28:13.242 Replay Protected Memory Block: Not Supported 00:28:13.242 00:28:13.242 Firmware Slot Information 00:28:13.242 ========================= 00:28:13.242 Active slot: 0 00:28:13.242 00:28:13.242 00:28:13.242 Error Log 00:28:13.242 ========= 00:28:13.242 00:28:13.242 Active Namespaces 00:28:13.242 ================= 00:28:13.242 Discovery Log Page 00:28:13.242 ================== 00:28:13.242 
Generation Counter: 2 00:28:13.242 Number of Records: 2 00:28:13.242 Record Format: 0 00:28:13.242 00:28:13.242 Discovery Log Entry 0 00:28:13.242 ---------------------- 00:28:13.242 Transport Type: 3 (TCP) 00:28:13.242 Address Family: 1 (IPv4) 00:28:13.242 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:13.242 Entry Flags: 00:28:13.242 Duplicate Returned Information: 0 00:28:13.242 Explicit Persistent Connection Support for Discovery: 0 00:28:13.242 Transport Requirements: 00:28:13.242 Secure Channel: Not Specified 00:28:13.242 Port ID: 1 (0x0001) 00:28:13.242 Controller ID: 65535 (0xffff) 00:28:13.242 Admin Max SQ Size: 32 00:28:13.242 Transport Service Identifier: 4420 00:28:13.242 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:13.242 Transport Address: 10.0.0.1 00:28:13.242 Discovery Log Entry 1 00:28:13.242 ---------------------- 00:28:13.242 Transport Type: 3 (TCP) 00:28:13.242 Address Family: 1 (IPv4) 00:28:13.242 Subsystem Type: 2 (NVM Subsystem) 00:28:13.243 Entry Flags: 00:28:13.243 Duplicate Returned Information: 0 00:28:13.243 Explicit Persistent Connection Support for Discovery: 0 00:28:13.243 Transport Requirements: 00:28:13.243 Secure Channel: Not Specified 00:28:13.243 Port ID: 1 (0x0001) 00:28:13.243 Controller ID: 65535 (0xffff) 00:28:13.243 Admin Max SQ Size: 32 00:28:13.243 Transport Service Identifier: 4420 00:28:13.243 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:13.243 Transport Address: 10.0.0.1 00:28:13.243 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:13.243 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.505 get_feature(0x01) failed 00:28:13.505 get_feature(0x02) failed 00:28:13.505 get_feature(0x04) failed 00:28:13.505 ===================================================== 00:28:13.505 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:13.505 ===================================================== 00:28:13.505 Controller Capabilities/Features 00:28:13.505 ================================ 00:28:13.505 Vendor ID: 0000 00:28:13.505 Subsystem Vendor ID: 0000 00:28:13.505 Serial Number: b99c75b925494237260c 00:28:13.505 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:13.505 Firmware Version: 6.7.0-68 00:28:13.505 Recommended Arb Burst: 6 00:28:13.505 IEEE OUI Identifier: 00 00 00 00:28:13.505 Multi-path I/O 00:28:13.505 May have multiple subsystem ports: Yes 00:28:13.505 May have multiple controllers: Yes 00:28:13.505 Associated with SR-IOV VF: No 00:28:13.505 Max Data Transfer Size: Unlimited 00:28:13.505 Max Number of Namespaces: 1024 00:28:13.505 Max Number of I/O Queues: 128 00:28:13.505 NVMe Specification Version (VS): 1.3 00:28:13.505 NVMe Specification Version (Identify): 1.3 00:28:13.505 Maximum Queue Entries: 1024 00:28:13.505 Contiguous Queues Required: No 00:28:13.505 Arbitration Mechanisms Supported 00:28:13.505 Weighted Round Robin: Not Supported 00:28:13.505 Vendor Specific: Not Supported 00:28:13.505 Reset Timeout: 7500 ms 00:28:13.505 Doorbell Stride: 4 bytes 00:28:13.505 NVM Subsystem Reset: Not Supported 00:28:13.505 Command Sets Supported 00:28:13.505 NVM Command Set: Supported 00:28:13.505 Boot Partition: Not Supported 00:28:13.505 Memory Page Size Minimum: 4096 bytes 00:28:13.505 Memory Page Size Maximum: 4096 bytes 00:28:13.505 
Persistent Memory Region: Not Supported 00:28:13.505 Optional Asynchronous Events Supported 00:28:13.505 Namespace Attribute Notices: Supported 00:28:13.505 Firmware Activation Notices: Not Supported 00:28:13.505 ANA Change Notices: Supported 00:28:13.505 PLE Aggregate Log Change Notices: Not Supported 00:28:13.505 LBA Status Info Alert Notices: Not Supported 00:28:13.505 EGE Aggregate Log Change Notices: Not Supported 00:28:13.505 Normal NVM Subsystem Shutdown event: Not Supported 00:28:13.505 Zone Descriptor Change Notices: Not Supported 00:28:13.505 Discovery Log Change Notices: Not Supported 00:28:13.505 Controller Attributes 00:28:13.505 128-bit Host Identifier: Supported 00:28:13.505 Non-Operational Permissive Mode: Not Supported 00:28:13.505 NVM Sets: Not Supported 00:28:13.505 Read Recovery Levels: Not Supported 00:28:13.505 Endurance Groups: Not Supported 00:28:13.505 Predictable Latency Mode: Not Supported 00:28:13.505 Traffic Based Keep ALive: Supported 00:28:13.505 Namespace Granularity: Not Supported 00:28:13.505 SQ Associations: Not Supported 00:28:13.505 UUID List: Not Supported 00:28:13.505 Multi-Domain Subsystem: Not Supported 00:28:13.505 Fixed Capacity Management: Not Supported 00:28:13.505 Variable Capacity Management: Not Supported 00:28:13.505 Delete Endurance Group: Not Supported 00:28:13.505 Delete NVM Set: Not Supported 00:28:13.505 Extended LBA Formats Supported: Not Supported 00:28:13.505 Flexible Data Placement Supported: Not Supported 00:28:13.505 00:28:13.505 Controller Memory Buffer Support 00:28:13.505 ================================ 00:28:13.505 Supported: No 00:28:13.505 00:28:13.505 Persistent Memory Region Support 00:28:13.505 ================================ 00:28:13.505 Supported: No 00:28:13.505 00:28:13.505 Admin Command Set Attributes 00:28:13.505 ============================ 00:28:13.505 Security Send/Receive: Not Supported 00:28:13.505 Format NVM: Not Supported 00:28:13.505 Firmware Activate/Download: Not Supported 00:28:13.505 Namespace Management: Not Supported 00:28:13.505 Device Self-Test: Not Supported 00:28:13.505 Directives: Not Supported 00:28:13.505 NVMe-MI: Not Supported 00:28:13.505 Virtualization Management: Not Supported 00:28:13.505 Doorbell Buffer Config: Not Supported 00:28:13.505 Get LBA Status Capability: Not Supported 00:28:13.505 Command & Feature Lockdown Capability: Not Supported 00:28:13.505 Abort Command Limit: 4 00:28:13.505 Async Event Request Limit: 4 00:28:13.505 Number of Firmware Slots: N/A 00:28:13.505 Firmware Slot 1 Read-Only: N/A 00:28:13.505 Firmware Activation Without Reset: N/A 00:28:13.505 Multiple Update Detection Support: N/A 00:28:13.505 Firmware Update Granularity: No Information Provided 00:28:13.505 Per-Namespace SMART Log: Yes 00:28:13.505 Asymmetric Namespace Access Log Page: Supported 00:28:13.505 ANA Transition Time : 10 sec 00:28:13.505 00:28:13.505 Asymmetric Namespace Access Capabilities 00:28:13.505 ANA Optimized State : Supported 00:28:13.505 ANA Non-Optimized State : Supported 00:28:13.505 ANA Inaccessible State : Supported 00:28:13.505 ANA Persistent Loss State : Supported 00:28:13.505 ANA Change State : Supported 00:28:13.505 ANAGRPID is not changed : No 00:28:13.505 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:13.505 00:28:13.505 ANA Group Identifier Maximum : 128 00:28:13.505 Number of ANA Group Identifiers : 128 00:28:13.505 Max Number of Allowed Namespaces : 1024 00:28:13.505 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:13.506 Command Effects Log Page: Supported 
00:28:13.506 Get Log Page Extended Data: Supported 00:28:13.506 Telemetry Log Pages: Not Supported 00:28:13.506 Persistent Event Log Pages: Not Supported 00:28:13.506 Supported Log Pages Log Page: May Support 00:28:13.506 Commands Supported & Effects Log Page: Not Supported 00:28:13.506 Feature Identifiers & Effects Log Page:May Support 00:28:13.506 NVMe-MI Commands & Effects Log Page: May Support 00:28:13.506 Data Area 4 for Telemetry Log: Not Supported 00:28:13.506 Error Log Page Entries Supported: 128 00:28:13.506 Keep Alive: Supported 00:28:13.506 Keep Alive Granularity: 1000 ms 00:28:13.506 00:28:13.506 NVM Command Set Attributes 00:28:13.506 ========================== 00:28:13.506 Submission Queue Entry Size 00:28:13.506 Max: 64 00:28:13.506 Min: 64 00:28:13.506 Completion Queue Entry Size 00:28:13.506 Max: 16 00:28:13.506 Min: 16 00:28:13.506 Number of Namespaces: 1024 00:28:13.506 Compare Command: Not Supported 00:28:13.506 Write Uncorrectable Command: Not Supported 00:28:13.506 Dataset Management Command: Supported 00:28:13.506 Write Zeroes Command: Supported 00:28:13.506 Set Features Save Field: Not Supported 00:28:13.506 Reservations: Not Supported 00:28:13.506 Timestamp: Not Supported 00:28:13.506 Copy: Not Supported 00:28:13.506 Volatile Write Cache: Present 00:28:13.506 Atomic Write Unit (Normal): 1 00:28:13.506 Atomic Write Unit (PFail): 1 00:28:13.506 Atomic Compare & Write Unit: 1 00:28:13.506 Fused Compare & Write: Not Supported 00:28:13.506 Scatter-Gather List 00:28:13.506 SGL Command Set: Supported 00:28:13.506 SGL Keyed: Not Supported 00:28:13.506 SGL Bit Bucket Descriptor: Not Supported 00:28:13.506 SGL Metadata Pointer: Not Supported 00:28:13.506 Oversized SGL: Not Supported 00:28:13.506 SGL Metadata Address: Not Supported 00:28:13.506 SGL Offset: Supported 00:28:13.506 Transport SGL Data Block: Not Supported 00:28:13.506 Replay Protected Memory Block: Not Supported 00:28:13.506 00:28:13.506 Firmware Slot Information 00:28:13.506 ========================= 00:28:13.506 Active slot: 0 00:28:13.506 00:28:13.506 Asymmetric Namespace Access 00:28:13.506 =========================== 00:28:13.506 Change Count : 0 00:28:13.506 Number of ANA Group Descriptors : 1 00:28:13.506 ANA Group Descriptor : 0 00:28:13.506 ANA Group ID : 1 00:28:13.506 Number of NSID Values : 1 00:28:13.506 Change Count : 0 00:28:13.506 ANA State : 1 00:28:13.506 Namespace Identifier : 1 00:28:13.506 00:28:13.506 Commands Supported and Effects 00:28:13.506 ============================== 00:28:13.506 Admin Commands 00:28:13.506 -------------- 00:28:13.506 Get Log Page (02h): Supported 00:28:13.506 Identify (06h): Supported 00:28:13.506 Abort (08h): Supported 00:28:13.506 Set Features (09h): Supported 00:28:13.506 Get Features (0Ah): Supported 00:28:13.506 Asynchronous Event Request (0Ch): Supported 00:28:13.506 Keep Alive (18h): Supported 00:28:13.506 I/O Commands 00:28:13.506 ------------ 00:28:13.506 Flush (00h): Supported 00:28:13.506 Write (01h): Supported LBA-Change 00:28:13.506 Read (02h): Supported 00:28:13.506 Write Zeroes (08h): Supported LBA-Change 00:28:13.506 Dataset Management (09h): Supported 00:28:13.506 00:28:13.506 Error Log 00:28:13.506 ========= 00:28:13.506 Entry: 0 00:28:13.506 Error Count: 0x3 00:28:13.506 Submission Queue Id: 0x0 00:28:13.506 Command Id: 0x5 00:28:13.506 Phase Bit: 0 00:28:13.506 Status Code: 0x2 00:28:13.506 Status Code Type: 0x0 00:28:13.506 Do Not Retry: 1 00:28:13.506 Error Location: 0x28 00:28:13.506 LBA: 0x0 00:28:13.506 Namespace: 0x0 00:28:13.506 Vendor Log 
Page: 0x0 00:28:13.506 ----------- 00:28:13.506 Entry: 1 00:28:13.506 Error Count: 0x2 00:28:13.506 Submission Queue Id: 0x0 00:28:13.506 Command Id: 0x5 00:28:13.506 Phase Bit: 0 00:28:13.506 Status Code: 0x2 00:28:13.506 Status Code Type: 0x0 00:28:13.506 Do Not Retry: 1 00:28:13.506 Error Location: 0x28 00:28:13.506 LBA: 0x0 00:28:13.506 Namespace: 0x0 00:28:13.506 Vendor Log Page: 0x0 00:28:13.506 ----------- 00:28:13.506 Entry: 2 00:28:13.506 Error Count: 0x1 00:28:13.506 Submission Queue Id: 0x0 00:28:13.506 Command Id: 0x4 00:28:13.506 Phase Bit: 0 00:28:13.506 Status Code: 0x2 00:28:13.506 Status Code Type: 0x0 00:28:13.506 Do Not Retry: 1 00:28:13.506 Error Location: 0x28 00:28:13.506 LBA: 0x0 00:28:13.506 Namespace: 0x0 00:28:13.506 Vendor Log Page: 0x0 00:28:13.506 00:28:13.506 Number of Queues 00:28:13.506 ================ 00:28:13.506 Number of I/O Submission Queues: 128 00:28:13.506 Number of I/O Completion Queues: 128 00:28:13.506 00:28:13.506 ZNS Specific Controller Data 00:28:13.506 ============================ 00:28:13.506 Zone Append Size Limit: 0 00:28:13.506 00:28:13.506 00:28:13.506 Active Namespaces 00:28:13.506 ================= 00:28:13.506 get_feature(0x05) failed 00:28:13.506 Namespace ID:1 00:28:13.506 Command Set Identifier: NVM (00h) 00:28:13.506 Deallocate: Supported 00:28:13.506 Deallocated/Unwritten Error: Not Supported 00:28:13.506 Deallocated Read Value: Unknown 00:28:13.506 Deallocate in Write Zeroes: Not Supported 00:28:13.506 Deallocated Guard Field: 0xFFFF 00:28:13.506 Flush: Supported 00:28:13.506 Reservation: Not Supported 00:28:13.506 Namespace Sharing Capabilities: Multiple Controllers 00:28:13.506 Size (in LBAs): 3750748848 (1788GiB) 00:28:13.506 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:13.506 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:13.506 UUID: 4b0e1390-5bf8-43ed-b2ff-40ef41e63c5c 00:28:13.506 Thin Provisioning: Not Supported 00:28:13.506 Per-NS Atomic Units: Yes 00:28:13.506 Atomic Write Unit (Normal): 8 00:28:13.506 Atomic Write Unit (PFail): 8 00:28:13.506 Preferred Write Granularity: 8 00:28:13.506 Atomic Compare & Write Unit: 8 00:28:13.506 Atomic Boundary Size (Normal): 0 00:28:13.506 Atomic Boundary Size (PFail): 0 00:28:13.506 Atomic Boundary Offset: 0 00:28:13.506 NGUID/EUI64 Never Reused: No 00:28:13.506 ANA group ID: 1 00:28:13.506 Namespace Write Protected: No 00:28:13.506 Number of LBA Formats: 1 00:28:13.506 Current LBA Format: LBA Format #00 00:28:13.506 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:13.506 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:13.506 rmmod nvme_tcp 00:28:13.506 rmmod nvme_fabrics 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:13.506 07:34:20 
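Both identify dumps above come from spdk_nvme_identify pointed at the kernel target; the same subsystem could also be attached with the kernel initiator. A sketch using the NQN and host identity from this run (the nvme connect/disconnect lines are illustrative only and are not executed by identify_kernel_nvmf.sh):

  # Fabric-level identify of the NVM subsystem, as executed above:
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  # Host-side attach/detach with nvme-cli:
  nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
  nvme list                                   # the exported namespace appears as a new /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn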
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.506 07:34:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.421 07:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.421 07:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:15.421 07:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:15.421 07:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:28:15.421 07:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:15.421 07:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:15.421 07:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:15.421 07:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:15.682 07:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:15.682 07:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:15.682 07:34:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:18.983 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 
0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:18.983 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:19.556 00:28:19.556 real 0m18.558s 00:28:19.556 user 0m5.039s 00:28:19.556 sys 0m10.506s 00:28:19.556 07:34:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:19.556 07:34:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.556 ************************************ 00:28:19.556 END TEST nvmf_identify_kernel_target 00:28:19.556 ************************************ 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.557 ************************************ 00:28:19.557 START TEST nvmf_auth_host 00:28:19.557 ************************************ 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:19.557 * Looking for test storage... 00:28:19.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.557 07:34:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:27.781 07:34:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:27.781 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:27.781 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:27.781 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:27.781 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.781 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.782 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:27.782 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.782 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.782 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:27.782 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:27.782 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.782 07:34:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.782 07:34:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:27.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:28:27.782 00:28:27.782 --- 10.0.0.2 ping statistics --- 00:28:27.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.782 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:27.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.408 ms 00:28:27.782 00:28:27.782 --- 10.0.0.1 ping statistics --- 00:28:27.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.782 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=250654 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 250654 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 250654 ']' 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
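The records above show nvmf_tcp_init splitting the physical e810 pair into a target/initiator setup: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk), both ends get 10.0.0.x/24 addresses, TCP port 4420 is opened on the initiator side, and reachability is verified in both directions before nvmf_tgt is started inside the namespace. The following is a minimal bash sketch of just that sequence, assuming the two interface names printed in the log and omitting the device discovery, error handling and cleanup that nvmf/common.sh performs around it.

    #!/usr/bin/env bash
    # Minimal sketch of the TCP init sequence traced above (run as root);
    # interface/namespace names are taken from the log output.
    set -e
    TARGET_IF=cvl_0_0          # port handed to the target namespace
    INITIATOR_IF=cvl_0_1       # port left in the default namespace
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

With the plumbing in place, the target application is launched through the namespace wrapper, matching the nvmfappstart record above: ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth.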
00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:27.782 07:34:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7830d81c4309015ee4562c373ad768b5 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.KW2 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7830d81c4309015ee4562c373ad768b5 0 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7830d81c4309015ee4562c373ad768b5 0 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7830d81c4309015ee4562c373ad768b5 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.KW2 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.KW2 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.KW2 00:28:27.782 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:28.044 07:34:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=366b7d89ec3c638255d71b1b2b278f79ffea2aa933913bf22e16a71f48e2e079 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kDR 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 366b7d89ec3c638255d71b1b2b278f79ffea2aa933913bf22e16a71f48e2e079 3 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 366b7d89ec3c638255d71b1b2b278f79ffea2aa933913bf22e16a71f48e2e079 3 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=366b7d89ec3c638255d71b1b2b278f79ffea2aa933913bf22e16a71f48e2e079 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kDR 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kDR 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.kDR 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6788566896ff679e947283aedada3c375ebccc0ef6929e66 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kSU 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6788566896ff679e947283aedada3c375ebccc0ef6929e66 0 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6788566896ff679e947283aedada3c375ebccc0ef6929e66 0 
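auth.sh builds five key/ctrlr-key pairs by repeatedly calling gen_dhchap_key with a digest name and a key length, as the surrounding records show: len/2 random bytes are read as hex via xxd from /dev/urandom, the secret is written to a mktemp file named /tmp/spdk.key-<digest>.XXX, passed through an inline python helper that produces the DHHC-1 form (the helper's body is not visible in this trace), and restricted to mode 0600. A minimal sketch of that flow follows; the DHHC-1 wrapping step is only stubbed out here, since the trace does not show its implementation.

    #!/usr/bin/env bash
    # Minimal sketch of gen_dhchap_key as seen in the trace above.
    # Digest ids used by the script: null=0, sha256=1, sha384=2, sha512=3.
    # The final DHHC-1 wrapping is done by an inline python helper whose body
    # is not visible in this log; the raw hex is stored here as a stand-in.
    gen_dhchap_key() {
        local digest=$1 len=$2 file key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
        file=$(mktemp -t "spdk.key-$digest.XXX")
        echo "$key" > "$file"    # real helper: format as DHHC-1:<id>:<...>:
        chmod 0600 "$file"
        echo "$file"
    }

    keys[0]=$(gen_dhchap_key null 32)      # host key
    ckeys[0]=$(gen_dhchap_key sha512 64)   # paired controller key

The generated file paths land in the keys[] and ckeys[] arrays and are later registered with the target one by one, e.g. rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KW2 and ckey0 /tmp/spdk.key-sha512.kDR, as the keyring_file_add_key records further below show.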
00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6788566896ff679e947283aedada3c375ebccc0ef6929e66 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kSU 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kSU 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.kSU 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=57daa29fdb7a2198acc4fa408a90ccf765428cffdf4e2e15 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fuk 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 57daa29fdb7a2198acc4fa408a90ccf765428cffdf4e2e15 2 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 57daa29fdb7a2198acc4fa408a90ccf765428cffdf4e2e15 2 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=57daa29fdb7a2198acc4fa408a90ccf765428cffdf4e2e15 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fuk 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fuk 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fuk 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:28.044 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:28.045 07:34:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ccd7611ca713ae36db8dda80edd24823 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.py1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ccd7611ca713ae36db8dda80edd24823 1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ccd7611ca713ae36db8dda80edd24823 1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ccd7611ca713ae36db8dda80edd24823 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.py1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.py1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.py1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6396108dd2f95209512b215c3a299c00 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9Sn 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6396108dd2f95209512b215c3a299c00 1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6396108dd2f95209512b215c3a299c00 1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=6396108dd2f95209512b215c3a299c00 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:28.045 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9Sn 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9Sn 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9Sn 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6d9f1de47c5378dba9cc78773acb2fca7756a0034226a71f 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gSI 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6d9f1de47c5378dba9cc78773acb2fca7756a0034226a71f 2 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6d9f1de47c5378dba9cc78773acb2fca7756a0034226a71f 2 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6d9f1de47c5378dba9cc78773acb2fca7756a0034226a71f 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gSI 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gSI 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.gSI 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:28.306 07:34:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b10ee88bd2a043a874d2953cec1c21c8 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rFr 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b10ee88bd2a043a874d2953cec1c21c8 0 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b10ee88bd2a043a874d2953cec1c21c8 0 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b10ee88bd2a043a874d2953cec1c21c8 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rFr 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rFr 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.rFr 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8952f7c64152c395807a20178c56e03513100a9198803232e8cc10b8baee3bfd 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ppE 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8952f7c64152c395807a20178c56e03513100a9198803232e8cc10b8baee3bfd 3 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8952f7c64152c395807a20178c56e03513100a9198803232e8cc10b8baee3bfd 3 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8952f7c64152c395807a20178c56e03513100a9198803232e8cc10b8baee3bfd 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ppE 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ppE 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ppE 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 250654 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 250654 ']' 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:28.306 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.567 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:28.567 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:28:28.567 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:28.567 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KW2 00:28:28.567 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.kDR ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kDR 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.kSU 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fuk ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.fuk 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.py1 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9Sn ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Sn 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.gSI 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.rFr ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.rFr 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ppE 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.568 07:34:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:28.568 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:28.829 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:28.829 07:34:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:32.131 Waiting for block devices as requested 00:28:32.131 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:32.131 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:32.131 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:32.131 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:32.131 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:32.131 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:32.131 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:32.391 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:32.391 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:32.652 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:32.652 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:32.652 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:32.912 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:32.912 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:32.912 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:32.912 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:33.173 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:34.115 No valid GPT data, bailing 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:34.115 07:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:34.115 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:34.115 00:28:34.115 Discovery Log Number of Records 2, Generation counter 2 00:28:34.115 =====Discovery Log Entry 0====== 00:28:34.115 trtype: tcp 00:28:34.115 adrfam: ipv4 00:28:34.115 subtype: current discovery subsystem 00:28:34.115 treq: not specified, sq flow control disable supported 00:28:34.115 portid: 1 00:28:34.115 trsvcid: 4420 00:28:34.115 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:34.115 traddr: 10.0.0.1 00:28:34.115 eflags: none 00:28:34.116 sectype: none 00:28:34.116 =====Discovery Log Entry 1====== 00:28:34.116 trtype: tcp 00:28:34.116 adrfam: ipv4 00:28:34.116 subtype: nvme subsystem 00:28:34.116 treq: not specified, sq flow control disable supported 00:28:34.116 portid: 1 00:28:34.116 trsvcid: 4420 00:28:34.116 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:34.116 traddr: 10.0.0.1 00:28:34.116 eflags: none 00:28:34.116 sectype: none 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.116 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.377 nvme0n1 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
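The trace above is the core of the test: host/auth.sh programs the kernel nvmet target through configfs (subsystem nqn.2024-02.io.spdk:cnode0 exported on 10.0.0.1:4420 over TCP, host nqn.2024-02.io.spdk:host0 added to allowed_hosts, a DH-HMAC-CHAP key installed), verifies the port with nvme discover, and then exercises the SPDK initiator over RPC. A minimal sketch of that initiator-side sequence, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py and that the DHHC-1 secrets were registered earlier in the script under the key names key1/ckey1:

    # Secrets use the NVMe text form DHHC-1:<id>:<base64 secret>:
    # Enable every digest/dhgroup the test will iterate over, then attach with key pair 1.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers        # a controller named nvme0 means authentication succeeded
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The repeated blocks that follow (each headed by an nvme0n1 line) are this same attach/verify/detach cycle driven by the nested loop in host/auth.sh, roughly:

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # re-key the kernel target
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, check, detach via RPC
        done
      done
    done

so each block covers one (digest, dhgroup, keyid) combination; the portion captured here works through sha256 with ffdhe2048, ffdhe3072 and the start of ffdhe4096.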
00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.377 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.638 nvme0n1 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.638 07:34:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.638 07:34:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.899 nvme0n1 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:34.899 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.900 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.161 nvme0n1 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.161 nvme0n1 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.161 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.422 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.422 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.423 nvme0n1 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.423 07:34:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.423 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.683 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.683 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.683 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.684 07:34:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.684 nvme0n1 00:28:35.684 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.684 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.684 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.684 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.684 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.684 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.945 
07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.945 nvme0n1 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.945 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.205 07:34:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.205 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.206 nvme0n1 00:28:36.206 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.466 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.466 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.466 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.466 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.467 07:34:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.467 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.729 nvme0n1 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.729 07:34:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.729 07:34:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.990 nvme0n1 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:36.990 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.991 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.252 nvme0n1 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:37.252 07:34:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.252 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.513 nvme0n1 00:28:37.513 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:28:37.513 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.513 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.513 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.513 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.513 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.773 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.773 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.773 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.773 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
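The records above and below repeat one pattern per digest/dhgroup/key-id combination: program the key pair into the kernel nvmet target, then authenticate from the SPDK host side and confirm the controller appears before tearing it down again. The following is a hedged reconstruction of that host-side sequence, built only from the rpc_cmd invocations visible in this trace; invoking scripts/rpc.py directly is an assumption (rpc_cmd is a thin wrapper around it), and key0/ckey0 are keyring entry names the test registered earlier, whose secrets are not reproduced here.

    # Hedged sketch of one connect_authenticate pass (sha256 / ffdhe4096 / key id 0),
    # reconstructed from the rpc_cmd calls traced above.
    scripts/rpc.py bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # The attach only succeeds if DH-HMAC-CHAP authentication completed, so the
    # test treats "nvme0 is listed" as the pass condition before detaching again.
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0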
00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.774 07:34:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.035 nvme0n1 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.036 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.036 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:38.036 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.036 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.297 nvme0n1 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.297 07:34:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.297 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.870 nvme0n1 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.870 07:34:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.870 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.132 nvme0n1 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 
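One detail worth calling out in the host/auth.sh@58 lines above: the controller key is attached through a conditional array expansion, so key id 4 (which has no ckey in this run) silently drops the --dhchap-ctrlr-key argument instead of passing an empty value. Below is a minimal, self-contained bash illustration of that idiom; the placeholder secret is not taken from this log.

    #!/usr/bin/env bash
    # Demonstrates ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) as seen
    # at host/auth.sh@58: the flag/value pair only materializes when a controller
    # key exists for that key id.
    declare -a ckeys
    ckeys[1]='DHHC-1:02:placeholder=='   # illustrative only, not a key from this run
    ckeys[4]=''                          # key id 4: unidirectional, no controller key

    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "key id $keyid adds: ${ckey[*]:-<nothing>}"
    done
    # Prints "--dhchap-ctrlr-key ckey1" for key id 1 and "<nothing>" for key id 4.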
00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.132 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.705 nvme0n1 00:28:39.705 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.705 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.705 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.705 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.705 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.705 07:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.705 07:34:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.705 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.276 nvme0n1 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.276 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.277 07:34:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.848 nvme0n1 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:40.848 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.849 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.421 nvme0n1 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.421 07:34:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:42.364 nvme0n1 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.364 07:34:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.936 nvme0n1 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:42.936 
07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.936 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.197 07:34:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.769 nvme0n1 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.769 
07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.769 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.030 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.640 nvme0n1 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.640 07:34:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.583 nvme0n1 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:45.583 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.584 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.845 nvme0n1 00:28:45.845 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.845 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.845 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.845 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.845 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:45.845 07:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.845 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.107 nvme0n1 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:46.107 07:34:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.107 nvme0n1 00:28:46.107 07:34:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.107 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.368 nvme0n1 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.368 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.630 nvme0n1 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:46.630 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:46.891 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:46.891 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.891 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.891 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:46.891 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.891 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.891 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:46.891 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.891 07:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.891 nvme0n1 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.891 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.152 
07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:47.152 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.153 07:34:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.153 nvme0n1 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.153 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.414 nvme0n1 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.414 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.675 07:34:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.675 nvme0n1 00:28:47.675 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.675 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.675 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.675 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.675 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.675 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:47.936 
07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.936 nvme0n1 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.936 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.197 
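
At this point the trace has walked keyids 0 through 4 for sha384/ffdhe3072 and is about to repeat the same sequence for ffdhe4096 and then ffdhe6144. For readability, here is a condensed paraphrase of the loop being executed (the host/auth.sh@101 through @104 lines visible in the xtrace), not the literal script source; the dhgroups and keys arrays are populated earlier in the test run.

    # Condensed paraphrase of the loop driving this section of the trace.
    # The digest is fixed at sha384 for this pass; dhgroups/keys come from test setup.
    for dhgroup in "${dhgroups[@]}"; do              # ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do               # 0..4
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # program the target-side key
            connect_authenticate sha384 "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done
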
07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.197 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.458 nvme0n1 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:48.458 07:34:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.458 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.720 nvme0n1 00:28:48.720 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.720 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.720 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.720 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.720 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.720 07:34:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.720 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.980 nvme0n1 00:28:48.980 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:49.241 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.242 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.503 nvme0n1 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:49.503 07:34:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.503 07:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.764 nvme0n1 00:28:49.764 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.764 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.764 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.764 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.764 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.764 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.764 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.764 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.764 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.764 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.025 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.286 nvme0n1 00:28:50.286 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.286 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.286 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.286 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.286 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.286 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.547 07:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.808 nvme0n1 00:28:50.808 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.808 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.808 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.808 07:34:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.808 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.069 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.070 07:34:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.070 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.643 nvme0n1 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:51.643 07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.643 
07:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.904 nvme0n1 00:28:51.904 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.904 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.904 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.904 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.904 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.165 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.166 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.739 nvme0n1 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.739 07:34:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.739 07:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.311 nvme0n1 00:28:53.311 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.311 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.311 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.311 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.311 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.311 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.572 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.573 07:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.144 nvme0n1 00:28:54.144 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.144 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.144 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.144 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.144 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.144 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.406 
07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.406 07:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.979 nvme0n1 00:28:54.979 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.979 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.979 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.979 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.979 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.979 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.240 07:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.813 nvme0n1 00:28:55.813 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.813 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.813 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.813 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.813 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.813 07:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.074 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.075 07:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.075 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.646 nvme0n1 00:28:56.646 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.646 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.646 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.646 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.646 07:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.646 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:56.908 nvme0n1 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.908 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.170 nvme0n1 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:57.170 
07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:57.170 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:57.171 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.171 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.171 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.171 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:57.171 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.171 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:57.171 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:57.171 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.432 nvme0n1 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.432 
07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.432 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.693 nvme0n1 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:57.693 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.694 07:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.694 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.955 nvme0n1 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.955 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.956 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.217 nvme0n1 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.217 
07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.217 07:35:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.217 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.478 nvme0n1 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:58.478 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:58.479 07:35:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.479 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.740 nvme0n1 00:28:58.740 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.740 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.740 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.740 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.740 07:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.740 07:35:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.740 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.001 nvme0n1 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:59.001 
07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:59.001 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.002 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
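For readers skimming this trace: every round above repeats the same host-side DH-CHAP cycle — restrict the initiator to one digest/DH group, attach with the keys for the current keyid, confirm the controller appears, then detach. Below is a minimal sketch of that cycle, assuming the SPDK test helper rpc_cmd is already in scope and that the DH-CHAP keys key0..key4 / ckey0..ckey3 were registered earlier in the test run (not shown in this excerpt); the RPC names and flags mirror the calls logged above, while the function name connect_authenticate_sketch is illustrative only, not the test's actual helper.

#!/usr/bin/env bash
# Sketch only: mirrors one connect/verify/detach round from the trace above.
# Assumes rpc_cmd (the SPDK test wrapper around scripts/rpc.py) is sourced and
# that keys named "keyN"/"ckeyN" were registered earlier in the test.

connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local hostnqn=nqn.2024-02.io.spdk:host0
    local subnqn=nqn.2024-02.io.spdk:cnode0

    # keyid 4 in this trace has no controller key configured, so pass
    # --dhchap-ctrlr-key only when one exists.
    local ckey_arg=()
    [[ $keyid -ne 4 ]] && ckey_arg=(--dhchap-ctrlr-key "ckey${keyid}")

    # Limit the initiator to the digest/DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect to the target at 10.0.0.1:4420 with the host key for this keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" "${ckey_arg[@]}"

    # Authentication succeeded if the controller shows up; detach for the next round.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

# e.g. the sha512/ffdhe4096 sweep that begins just below:
# connect_authenticate_sketch sha512 ffdhe4096 0

The surrounding trace drives this cycle from the loops at host/auth.sh@101-103 (over dhgroups ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144 and keyids 0-4), calling nvmet_auth_set_key on the target side before each host connect; keyid 4 is the unidirectional case, where ckey is empty and the controller key argument is omitted.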
00:28:59.263 nvme0n1 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:59.263 07:35:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:59.263 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:59.264 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.264 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.264 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.525 nvme0n1 00:28:59.525 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.525 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.525 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.525 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.525 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.786 07:35:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.786 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.787 07:35:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.787 07:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.048 nvme0n1 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.048 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.309 nvme0n1 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:00.309 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.570 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.872 nvme0n1 00:29:00.872 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.872 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.872 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.872 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.872 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.872 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.872 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.872 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.872 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.872 07:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.872 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.133 nvme0n1 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:01.133 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.134 07:35:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.134 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.706 nvme0n1 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.706 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:01.707 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.707 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:01.707 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:01.707 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:01.707 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:01.707 07:35:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.707 07:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.279 nvme0n1 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.279 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.851 nvme0n1 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.851 07:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.851 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.112 nvme0n1 00:29:03.112 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.112 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.112 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.112 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.112 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.372 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.372 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.372 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.372 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.372 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.372 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.372 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.372 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:03.372 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.373 07:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.373 07:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.946 nvme0n1 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzgzMGQ4MWM0MzA5MDE1ZWU0NTYyYzM3M2FkNzY4YjUlo6rj: 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: ]] 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzY2YjdkODllYzNjNjM4MjU1ZDcxYjFiMmIyNzhmNzlmZmVhMmFhOTMzOTEzYmYyMmUxNmE3MWY0OGUyZTA3OVUuWE4=: 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.946 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.518 nvme0n1 00:29:04.518 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.518 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.518 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.518 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.518 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.518 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.779 07:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.351 nvme0n1 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.351 07:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NkNzYxMWNhNzEzYWUzNmRiOGRkYTgwZWRkMjQ4MjMOxtHF: 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: ]] 00:29:05.351 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM5NjEwOGRkMmY5NTIwOTUxMmIyMTVjM2EyOTljMDAa2mj4: 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.612 07:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.612 07:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.183 nvme0n1 00:29:06.183 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.183 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.183 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.183 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.183 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.183 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.183 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.183 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.183 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.183 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ5ZjFkZTQ3YzUzNzhkYmE5Y2M3ODc3M2FjYjJmY2E3NzU2YTAwMzQyMjZhNzFmddc5Ew==: 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: ]] 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjEwZWU4OGJkMmEwNDNhODc0ZDI5NTNjZWMxYzIxYzhjWdKc: 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:06.443 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:06.444 07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.444 
07:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.014 nvme0n1 00:29:07.014 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.014 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.014 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.014 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.014 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.014 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.014 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.014 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.014 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.014 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODk1MmY3YzY0MTUyYzM5NTgwN2EyMDE3OGM1NmUwMzUxMzEwMGE5MTk4ODAzMjMyZThjYzEwYjhiYWVlM2JmZEHzYUc=: 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.274 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.275 07:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.845 nvme0n1 00:29:07.845 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.845 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.845 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.845 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.845 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.845 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.845 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.845 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.845 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.845 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Njc4ODU2Njg5NmZmNjc5ZTk0NzI4M2FlZGFkYTNjMzc1ZWJjY2MwZWY2OTI5ZTY2Vc1gEg==: 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: ]] 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdkYWEyOWZkYjdhMjE5OGFjYzRmYTQwOGE5MGNjZjc2NTQyOGNmZmRmNGUyZTE1oFCCTQ==: 00:29:08.105 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.106 request: 00:29:08.106 { 00:29:08.106 "name": "nvme0", 00:29:08.106 "trtype": "tcp", 00:29:08.106 "traddr": "10.0.0.1", 00:29:08.106 "adrfam": "ipv4", 00:29:08.106 "trsvcid": "4420", 00:29:08.106 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:08.106 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:08.106 "prchk_reftag": false, 00:29:08.106 "prchk_guard": false, 00:29:08.106 "hdgst": false, 00:29:08.106 "ddgst": false, 00:29:08.106 "method": "bdev_nvme_attach_controller", 00:29:08.106 "req_id": 1 00:29:08.106 } 00:29:08.106 Got JSON-RPC error response 00:29:08.106 response: 00:29:08.106 { 00:29:08.106 "code": -5, 00:29:08.106 "message": "Input/output error" 00:29:08.106 } 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.106 07:35:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.106 request: 00:29:08.106 { 00:29:08.106 "name": "nvme0", 00:29:08.106 "trtype": "tcp", 00:29:08.106 "traddr": "10.0.0.1", 00:29:08.106 "adrfam": "ipv4", 00:29:08.106 "trsvcid": "4420", 00:29:08.106 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:08.106 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:08.106 "prchk_reftag": false, 00:29:08.106 "prchk_guard": false, 00:29:08.106 "hdgst": false, 00:29:08.106 "ddgst": false, 00:29:08.106 "dhchap_key": "key2", 00:29:08.106 "method": "bdev_nvme_attach_controller", 00:29:08.106 "req_id": 1 00:29:08.106 } 00:29:08.106 Got JSON-RPC error response 00:29:08.106 response: 00:29:08.106 { 00:29:08.106 "code": -5, 00:29:08.106 "message": "Input/output error" 00:29:08.106 } 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.106 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.367 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.368 request: 00:29:08.368 { 00:29:08.368 "name": "nvme0", 00:29:08.368 "trtype": "tcp", 00:29:08.368 "traddr": "10.0.0.1", 00:29:08.368 "adrfam": "ipv4", 00:29:08.368 "trsvcid": "4420", 00:29:08.368 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:08.368 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:08.368 "prchk_reftag": false, 00:29:08.368 "prchk_guard": false, 00:29:08.368 "hdgst": false, 00:29:08.368 "ddgst": false, 00:29:08.368 "dhchap_key": "key1", 00:29:08.368 "dhchap_ctrlr_key": "ckey2", 00:29:08.368 "method": "bdev_nvme_attach_controller", 00:29:08.368 "req_id": 1 00:29:08.368 } 00:29:08.368 Got JSON-RPC error response 00:29:08.368 response: 00:29:08.368 { 00:29:08.368 "code": -5, 00:29:08.368 "message": "Input/output error" 00:29:08.368 } 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:08.368 rmmod nvme_tcp 00:29:08.368 rmmod nvme_fabrics 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 250654 ']' 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 250654 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 250654 ']' 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 250654 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 250654 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 250654' 00:29:08.368 killing process with pid 250654 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 250654 00:29:08.368 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 250654 00:29:08.630 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:08.630 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:08.630 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:08.630 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:08.630 07:35:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:08.630 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.630 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.630 07:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:10.544 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:10.806 07:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:14.110 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:14.110 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:14.372 07:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.KW2 /tmp/spdk.key-null.kSU /tmp/spdk.key-sha256.py1 /tmp/spdk.key-sha384.gSI /tmp/spdk.key-sha512.ppE /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:14.372 07:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:17.710 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:17.710 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:17.710 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:17.710 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:17.710 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:17.710 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:17.710 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:17.710 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:17.710 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:17.711 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:17.711 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:17.711 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:17.711 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:17.711 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:17.711 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:17.711 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:17.711 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:17.972 00:29:17.972 real 0m58.525s 00:29:17.972 user 0m52.140s 00:29:17.972 sys 0m15.116s 00:29:17.972 07:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.972 07:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.972 ************************************ 00:29:17.972 END TEST nvmf_auth_host 00:29:17.972 ************************************ 00:29:17.972 07:35:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:17.972 07:35:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:17.972 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:17.972 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:17.972 07:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.234 ************************************ 00:29:18.234 START TEST nvmf_digest 00:29:18.234 ************************************ 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:18.234 * Looking for test storage... 
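The last host/auth.sh checks traced above exercise the DH-HMAC-CHAP failure path: bdev_nvme_attach_controller is expected to fail with JSON-RPC error -5 (Input/output error) when no key, the wrong key slot, or a mismatched controller key is offered, and the NOT wrapper turns that expected failure into a pass. The suite then removes the generated /tmp/spdk.key-* files and unwinds the kernel nvmet configfs entries before the digest suite begins. A condensed, stand-alone form of one of those negative checks could look like the sketch below (rpc_cmd in the trace is SPDK's scripts/rpc.py; the address, port, and NQNs are the ones this job uses and are assumptions anywhere else):

    # Expect the attach to fail: the kernel target above was keyed on slot 1, so offering key2 must be rejected.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
           -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
        echo "unexpected: attach with a non-matching DH-HMAC-CHAP key succeeded" >&2
        exit 1
    fi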
00:29:18.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:18.234 
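Before probing the NICs, digest.sh sources test/nvmf/common.sh, which pins the listener ports (4420-4422), generates a fresh host NQN with nvme gen-hostnqn, and sets the bperf defaults used for the rest of the suite (subsystem nqn.2016-06.io.spdk:cnode1, control socket /var/tmp/bperf.sock, 2-second runs). A rough stand-alone equivalent of those defaults, with the run-specific values treated as illustrative only:

    # Sketch of the environment common.sh/digest.sh establish above (values are per-run, not fixed).
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # requires nvme-cli
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # the host ID is the UUID suffix of the NQN
    nqn=nqn.2016-06.io.spdk:cnode1
    bperfsock=/var/tmp/bperf.sock
    runtime=2
    echo "host $NVME_HOSTNQN ($NVME_HOSTID) will target $nqn for ${runtime}s runs"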
07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:18.234 07:35:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:26.384 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:26.384 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:26.384 
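The device scan traced here walks the supported PCI IDs (the two E810 ports at 0000:4b:00.0 and 0000:4b:00.1, device 0x159b bound to ice) and resolves each one to its kernel net interface by globbing sysfs. A minimal sketch of that lookup, with the PCI addresses taken from this host and therefore only examples:

    # Resolve a NIC's PCI address to its net device the same way the trace does (sysfs glob).
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "Found net devices under $pci: $(basename "$netdir")"
        done
    done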
07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:26.384 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:26.384 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.384 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.385 07:35:32 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:26.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:29:26.385 00:29:26.385 --- 10.0.0.2 ping statistics --- 00:29:26.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.385 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:26.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.416 ms 00:29:26.385 00:29:26.385 --- 10.0.0.1 ping statistics --- 00:29:26.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.385 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:26.385 ************************************ 00:29:26.385 START TEST nvmf_digest_clean 00:29:26.385 ************************************ 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=267284 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 267284 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 267284 ']' 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:26.385 07:35:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:26.385 [2024-07-25 07:35:32.897340] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:29:26.385 [2024-07-25 07:35:32.897426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.385 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.385 [2024-07-25 07:35:32.965215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.385 [2024-07-25 07:35:33.028077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.385 [2024-07-25 07:35:33.028122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.385 [2024-07-25 07:35:33.028130] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.385 [2024-07-25 07:35:33.028136] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.385 [2024-07-25 07:35:33.028142] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
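nvmf_digest_clean then starts its own nvmf_tgt inside the namespace that nvmf_tcp_init built just above: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, cvl_0_1 keeps 10.0.0.1/24 in the root namespace, port 4420 is opened in iptables, and the target is launched under ip netns exec with --wait-for-rpc (presumably so DSA offload could be configured first if the dsa_target variant were selected). Condensed from the commands traced above, with the cvl_* interface names and binary path specific to this host:

    # Namespace plumbing and target start, condensed from nvmf_tcp_init/nvmfappstart above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &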
00:29:26.385 [2024-07-25 07:35:33.028161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.385 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:26.385 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:26.385 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:26.385 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:26.385 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:26.385 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.385 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:26.385 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:26.385 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:26.385 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.385 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:26.676 null0 00:29:26.676 [2024-07-25 07:35:33.762982] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.676 [2024-07-25 07:35:33.787163] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=267408 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 267408 /var/tmp/bperf.sock 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 267408 ']' 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:26.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:26.676 07:35:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:26.676 [2024-07-25 07:35:33.842474] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:29:26.676 [2024-07-25 07:35:33.842525] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid267408 ] 00:29:26.676 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.676 [2024-07-25 07:35:33.919374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.676 [2024-07-25 07:35:33.983236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.618 07:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:27.618 07:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:27.618 07:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:27.618 07:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:27.618 07:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:27.618 07:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.618 07:35:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.879 nvme0n1 00:29:27.879 07:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:27.879 07:35:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:28.140 Running I/O for 2 seconds... 
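The 4096-byte, queue-depth-128 randread run above follows the bperf pattern the digest tests use throughout: bdevperf is started on its own RPC socket with -z and --wait-for-rpc, the framework is initialized over that socket, an NVMe-oF controller is attached with data digest enabled (--ddgst), and the 2-second workload is driven through bdevperf.py. The same sequence, with the workspace prefix dropped for brevity:

    # Condensed bperf sequence for the first digest-clean run (paths shortened from the job's workspace).
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests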
00:29:30.052 00:29:30.052 Latency(us) 00:29:30.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.052 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:30.052 nvme0n1 : 2.00 20856.09 81.47 0.00 0.00 6128.58 3208.53 18131.63 00:29:30.052 =================================================================================================================== 00:29:30.052 Total : 20856.09 81.47 0.00 0.00 6128.58 3208.53 18131.63 00:29:30.052 0 00:29:30.052 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:30.053 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:30.053 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:30.053 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:30.053 | select(.opcode=="crc32c") 00:29:30.053 | "\(.module_name) \(.executed)"' 00:29:30.053 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 267408 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 267408 ']' 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 267408 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 267408 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 267408' 00:29:30.313 killing process with pid 267408 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 267408 00:29:30.313 Received shutdown signal, test time was about 2.000000 seconds 00:29:30.313 00:29:30.313 Latency(us) 00:29:30.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.313 =================================================================================================================== 00:29:30.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 267408 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=268259 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 268259 /var/tmp/bperf.sock 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 268259 ']' 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:30.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:30.313 07:35:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:30.573 [2024-07-25 07:35:37.684935] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:29:30.573 [2024-07-25 07:35:37.684995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid268259 ] 00:29:30.573 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:30.573 Zero copy mechanism will not be used. 
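The trace repeats the same run_bperf sequence for each digest_clean case (randread and randwrite, 4 KiB/qd 128 and 128 KiB/qd 16). As a reading aid, here is a condensed shell sketch of one iteration, assembled only from the commands and paths that appear in the trace itself; the helper names bperf_rpc and bperf_py mirror host/digest.sh as shown above, and anything not visible in the trace (backgrounding, waiting, error handling) is abbreviated.

# Condensed sketch of one digest_clean iteration (host/digest.sh as traced above).
# Arguments and paths are taken from the log; waitforlisten/error handling omitted.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
bperf_py()  { "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock "$@"; }

# Start bdevperf paused (--wait-for-rpc), listening on its own UNIX socket.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
bperfpid=$!
# (the real script waits on /var/tmp/bperf.sock via waitforlisten before continuing)

bperf_rpc framework_start_init
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # data digest enabled
bperf_py perform_tests                                # "Running I/O for 2 seconds..."
kill "$bperfpid"; wait "$bperfpid"                    # killprocess/wait seen in the trace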
00:29:30.573 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.573 [2024-07-25 07:35:37.759358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.573 [2024-07-25 07:35:37.813096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.142 07:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:31.142 07:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:31.142 07:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:31.142 07:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:31.142 07:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:31.403 07:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:31.403 07:35:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:31.663 nvme0n1 00:29:31.663 07:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:31.663 07:35:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:31.923 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:31.923 Zero copy mechanism will not be used. 00:29:31.923 Running I/O for 2 seconds... 
00:29:33.836 00:29:33.836 Latency(us) 00:29:33.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.836 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:33.836 nvme0n1 : 2.01 1971.17 246.40 0.00 0.00 8114.13 2785.28 12342.61 00:29:33.836 =================================================================================================================== 00:29:33.836 Total : 1971.17 246.40 0.00 0.00 8114.13 2785.28 12342.61 00:29:33.836 0 00:29:33.836 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:33.836 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:33.836 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:33.836 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:33.836 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:33.836 | select(.opcode=="crc32c") 00:29:33.836 | "\(.module_name) \(.executed)"' 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 268259 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 268259 ']' 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 268259 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 268259 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 268259' 00:29:34.096 killing process with pid 268259 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 268259 00:29:34.096 Received shutdown signal, test time was about 2.000000 seconds 00:29:34.096 00:29:34.096 Latency(us) 00:29:34.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.096 =================================================================================================================== 00:29:34.096 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 268259 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=268998 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 268998 /var/tmp/bperf.sock 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 268998 ']' 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:34.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:34.096 07:35:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:34.357 [2024-07-25 07:35:41.509144] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
00:29:34.357 [2024-07-25 07:35:41.509198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid268998 ] 00:29:34.357 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.357 [2024-07-25 07:35:41.583808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.357 [2024-07-25 07:35:41.636973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.926 07:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:34.926 07:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:34.926 07:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:34.926 07:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:34.926 07:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:35.185 07:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.185 07:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.446 nvme0n1 00:29:35.446 07:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:35.446 07:35:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:35.446 Running I/O for 2 seconds... 
00:29:37.992 00:29:37.992 Latency(us) 00:29:37.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.992 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.992 nvme0n1 : 2.00 21765.79 85.02 0.00 0.00 5873.92 4696.75 19442.35 00:29:37.992 =================================================================================================================== 00:29:37.992 Total : 21765.79 85.02 0.00 0.00 5873.92 4696.75 19442.35 00:29:37.992 0 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:37.992 | select(.opcode=="crc32c") 00:29:37.992 | "\(.module_name) \(.executed)"' 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 268998 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 268998 ']' 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 268998 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 268998 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 268998' 00:29:37.992 killing process with pid 268998 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 268998 00:29:37.992 Received shutdown signal, test time was about 2.000000 seconds 00:29:37.992 00:29:37.992 Latency(us) 00:29:37.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.992 =================================================================================================================== 00:29:37.992 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.992 07:35:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 268998 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=269684 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 269684 /var/tmp/bperf.sock 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 269684 ']' 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:37.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:37.992 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:37.992 [2024-07-25 07:35:45.158601] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:29:37.992 [2024-07-25 07:35:45.158658] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid269684 ] 00:29:37.992 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:37.992 Zero copy mechanism will not be used. 
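After each 2-second run the script checks that the CRC32C data-digest work was really executed, and by the expected accel module. The check is visible in the trace: accel_get_stats is queried over the bperf socket and reduced with jq. A minimal sketch of that verification follows; the jq filter is copied from the trace, exp_module is software because these runs use scan_dsa=false, and the exact plumbing between read and get_accel_stats is abbreviated here.

# Sketch of the crc32c accounting check (host/digest.sh@93-96 as traced above).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_accel_stats() {
    "$RPC" -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[]
                  | select(.opcode=="crc32c")
                  | "\(.module_name) \(.executed)"'
}

read -r acc_module acc_executed < <(get_accel_stats)
exp_module=software                      # scan_dsa=false, so no DSA offload expected
(( acc_executed > 0 ))                   # digest CRC32C must actually have been executed
[[ "$acc_module" == "$exp_module" ]]     # ...and by the expected module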
00:29:37.992 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.992 [2024-07-25 07:35:45.234034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.992 [2024-07-25 07:35:45.286878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.565 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:38.565 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:29:38.565 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:38.565 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:38.565 07:35:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:38.826 07:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:38.826 07:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:39.088 nvme0n1 00:29:39.349 07:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:39.349 07:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:39.349 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:39.349 Zero copy mechanism will not be used. 00:29:39.349 Running I/O for 2 seconds... 
00:29:41.265 00:29:41.265 Latency(us) 00:29:41.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.265 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:41.265 nvme0n1 : 2.01 2417.82 302.23 0.00 0.00 6606.67 5133.65 24029.87 00:29:41.265 =================================================================================================================== 00:29:41.265 Total : 2417.82 302.23 0.00 0.00 6606.67 5133.65 24029.87 00:29:41.265 0 00:29:41.265 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:41.265 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:41.265 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:41.265 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:41.265 | select(.opcode=="crc32c") 00:29:41.265 | "\(.module_name) \(.executed)"' 00:29:41.265 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 269684 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 269684 ']' 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 269684 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 269684 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 269684' 00:29:41.527 killing process with pid 269684 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 269684 00:29:41.527 Received shutdown signal, test time was about 2.000000 seconds 00:29:41.527 00:29:41.527 Latency(us) 00:29:41.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.527 =================================================================================================================== 00:29:41.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:41.527 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 269684 00:29:41.788 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 267284 00:29:41.788 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 267284 ']' 00:29:41.789 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 267284 00:29:41.789 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:29:41.789 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:41.789 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 267284 00:29:41.789 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:41.789 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:41.789 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 267284' 00:29:41.789 killing process with pid 267284 00:29:41.789 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 267284 00:29:41.789 07:35:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 267284 00:29:41.789 00:29:41.789 real 0m16.268s 00:29:41.789 user 0m32.095s 00:29:41.789 sys 0m3.121s 00:29:41.789 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:41.789 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:41.789 ************************************ 00:29:41.789 END TEST nvmf_digest_clean 00:29:41.789 ************************************ 00:29:41.789 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:41.789 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:41.789 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:41.789 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:42.050 ************************************ 00:29:42.050 START TEST nvmf_digest_error 00:29:42.050 ************************************ 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=270397 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 270397 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 270397 ']' 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:42.050 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:42.050 [2024-07-25 07:35:49.236308] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:29:42.050 [2024-07-25 07:35:49.236358] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.050 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.050 [2024-07-25 07:35:49.302597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.050 [2024-07-25 07:35:49.369482] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.050 [2024-07-25 07:35:49.369518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.050 [2024-07-25 07:35:49.369526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.050 [2024-07-25 07:35:49.369532] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.050 [2024-07-25 07:35:49.369537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
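The nvmf_digest_error phase starting here reuses the same bdevperf flow but arms the accel error module first: as the --wait-for-rpc flag above suggests, crc32c is assigned to the error module before the target finishes initialization, and after the controller is attached, corruption is injected for crc32c (-t corrupt -i 256), which is what produces the run of nvme_tcp "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" completions further below. A hedged sketch of that setup follows; the RPCs are the ones visible in the trace below, the helper names rpc_tgt and rpc_bperf are illustrative stand-ins for the suite's rpc_cmd and bperf_rpc, and socket/namespace selection is omitted.

# Sketch of the digest-error injection setup (RPCs as they appear in the trace below).
# rpc_tgt/rpc_bperf are illustrative names for rpc_cmd (target) and bperf_rpc (bdevperf);
# which UNIX socket each one targets is left out here.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_tgt()   { "$SPDK/scripts/rpc.py" "$@"; }                        # nvmf target side
rpc_bperf() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; } # bdevperf instance

rpc_tgt accel_assign_opc -o crc32c -m error              # crc32c handled by the error module
rpc_bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_tgt accel_error_inject_error -o crc32c -t disable    # start with injection disabled
rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt crc32c ops (-i 256 as traced)
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests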
00:29:42.050 [2024-07-25 07:35:49.369556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.995 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.995 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:42.995 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:42.995 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:42.995 07:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:42.995 [2024-07-25 07:35:50.043493] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:42.995 null0 00:29:42.995 [2024-07-25 07:35:50.126090] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.995 [2024-07-25 07:35:50.150293] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=270744 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 270744 /var/tmp/bperf.sock 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 270744 ']' 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:42.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:42.995 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:42.995 [2024-07-25 07:35:50.211969] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:29:42.995 [2024-07-25 07:35:50.212020] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid270744 ] 00:29:42.995 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.995 [2024-07-25 07:35:50.285270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.995 [2024-07-25 07:35:50.338959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.938 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:43.938 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:43.938 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:43.938 07:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:43.938 07:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:43.938 07:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.938 07:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:43.938 07:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.938 07:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:43.938 07:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:44.199 nvme0n1 00:29:44.199 07:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:44.199 07:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.199 07:35:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.199 07:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.199 07:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:44.199 07:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:44.199 Running I/O for 2 seconds... 00:29:44.199 [2024-07-25 07:35:51.510958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.199 [2024-07-25 07:35:51.510989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.199 [2024-07-25 07:35:51.510998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.199 [2024-07-25 07:35:51.523343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.199 [2024-07-25 07:35:51.523364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.199 [2024-07-25 07:35:51.523371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.199 [2024-07-25 07:35:51.535527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.199 [2024-07-25 07:35:51.535545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.199 [2024-07-25 07:35:51.535552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.199 [2024-07-25 07:35:51.548233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.199 [2024-07-25 07:35:51.548251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.199 [2024-07-25 07:35:51.548259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.199 [2024-07-25 07:35:51.560416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.199 [2024-07-25 07:35:51.560435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.199 [2024-07-25 07:35:51.560442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.461 [2024-07-25 07:35:51.572794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.461 [2024-07-25 07:35:51.572813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.461 [2024-07-25 07:35:51.572820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.461 [2024-07-25 07:35:51.584916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.461 [2024-07-25 07:35:51.584934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.461 [2024-07-25 07:35:51.584941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.461 [2024-07-25 07:35:51.597043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.461 [2024-07-25 07:35:51.597064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.461 [2024-07-25 07:35:51.597071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.461 [2024-07-25 07:35:51.608785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.461 [2024-07-25 07:35:51.608803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.461 [2024-07-25 07:35:51.608810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.461 [2024-07-25 07:35:51.621283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.461 [2024-07-25 07:35:51.621301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.461 [2024-07-25 07:35:51.621308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.461 [2024-07-25 07:35:51.633862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.461 [2024-07-25 07:35:51.633880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.461 [2024-07-25 07:35:51.633886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.645843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.645861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.645868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.658130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.658148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:44.462 [2024-07-25 07:35:51.658155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.669885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.669903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.669910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.682192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.682214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.682221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.694459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.694477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.694484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.706924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.706942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.706949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.719285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.719302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.719309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.731860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.731877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.731884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.744618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.744636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15407 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.744643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.755771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.755788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.755795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.768323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.768340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.768346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.781255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.781272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.781279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.792343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.792361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.792368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.805194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.805216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.805226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.462 [2024-07-25 07:35:51.817531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.462 [2024-07-25 07:35:51.817549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.462 [2024-07-25 07:35:51.817556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.830074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.830092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.830099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.842123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.842141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.842148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.854826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.854843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.854850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.866928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.866945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.866951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.879114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.879131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.879138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.891545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.891562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.891569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.903981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.903998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.904005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.916103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 
[2024-07-25 07:35:51.916124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.916131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.928245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.928263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.928269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.940096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.940113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.940119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.952378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.952395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.952402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.964432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.964449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.964456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.976621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.976638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.976645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:51.988603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:51.988620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:51.988626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:52.000765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:52.000782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:52.000789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:52.012856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:52.012873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.724 [2024-07-25 07:35:52.012879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.724 [2024-07-25 07:35:52.025213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.724 [2024-07-25 07:35:52.025230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.725 [2024-07-25 07:35:52.025236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.725 [2024-07-25 07:35:52.037794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.725 [2024-07-25 07:35:52.037812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.725 [2024-07-25 07:35:52.037818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.725 [2024-07-25 07:35:52.050052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.725 [2024-07-25 07:35:52.050068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.725 [2024-07-25 07:35:52.050075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.725 [2024-07-25 07:35:52.062258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.725 [2024-07-25 07:35:52.062275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.725 [2024-07-25 07:35:52.062281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.725 [2024-07-25 07:35:52.074326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.725 [2024-07-25 07:35:52.074343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.725 [2024-07-25 07:35:52.074350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.725 [2024-07-25 07:35:52.085692] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.725 [2024-07-25 07:35:52.085709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.725 [2024-07-25 07:35:52.085715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.986 [2024-07-25 07:35:52.097962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.097979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.097986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.110319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.110336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.110343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.123359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.123376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.123386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.135790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.135807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.135814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.147386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.147403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.147409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.159695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.159711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.159718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:44.987 [2024-07-25 07:35:52.171822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.171839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.171846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.183825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.183842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.183849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.195872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.195888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.195895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.208032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.208048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.208055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.220483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.220500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.220507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.233121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.233141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.233147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.245103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.245120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.245127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.257011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.257028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.257034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.269304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.269321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.269328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.281498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.281515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.281522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.293555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.293572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.293578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.306013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.306030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.306036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.318629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.318645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.318652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.330770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.330787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.330793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.987 [2024-07-25 07:35:52.342845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:44.987 [2024-07-25 07:35:52.342862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.987 [2024-07-25 07:35:52.342869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.249 [2024-07-25 07:35:52.354363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.249 [2024-07-25 07:35:52.354380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.249 [2024-07-25 07:35:52.354388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.249 [2024-07-25 07:35:52.366979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.249 [2024-07-25 07:35:52.366996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.249 [2024-07-25 07:35:52.367003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.249 [2024-07-25 07:35:52.380674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.249 [2024-07-25 07:35:52.380691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.249 [2024-07-25 07:35:52.380697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.249 [2024-07-25 07:35:52.391774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.249 [2024-07-25 07:35:52.391791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.249 [2024-07-25 07:35:52.391798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.249 [2024-07-25 07:35:52.404011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.249 [2024-07-25 07:35:52.404028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.249 [2024-07-25 07:35:52.404034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.249 [2024-07-25 07:35:52.416015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.249 [2024-07-25 07:35:52.416032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.249 [2024-07-25 07:35:52.416038] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.249 [2024-07-25 07:35:52.428093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.428110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.428117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.440743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.440764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.440770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.453232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.453249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.453257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.465272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.465289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.465296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.477083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.477100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.477107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.489795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.489811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.489818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.501440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.501457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:45.250 [2024-07-25 07:35:52.501464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.514263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.514280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.514286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.526374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.526391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.526398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.539342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.539359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.539366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.550859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.550876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.550883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.562468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.562485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.562492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.574871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.574888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.574894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.587798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.587815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13583 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.587822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.600012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.600029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.600035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.250 [2024-07-25 07:35:52.612254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.250 [2024-07-25 07:35:52.612270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.250 [2024-07-25 07:35:52.612277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.512 [2024-07-25 07:35:52.624559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.512 [2024-07-25 07:35:52.624576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.512 [2024-07-25 07:35:52.624583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.512 [2024-07-25 07:35:52.636417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.512 [2024-07-25 07:35:52.636435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.512 [2024-07-25 07:35:52.636442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.512 [2024-07-25 07:35:52.648456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.512 [2024-07-25 07:35:52.648473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.512 [2024-07-25 07:35:52.648483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.512 [2024-07-25 07:35:52.661318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.512 [2024-07-25 07:35:52.661336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.512 [2024-07-25 07:35:52.661342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.512 [2024-07-25 07:35:52.672869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.512 [2024-07-25 07:35:52.672886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:8429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.512 [2024-07-25 07:35:52.672893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.512 [2024-07-25 07:35:52.685204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.512 [2024-07-25 07:35:52.685221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.512 [2024-07-25 07:35:52.685228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.512 [2024-07-25 07:35:52.697160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.512 [2024-07-25 07:35:52.697177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.512 [2024-07-25 07:35:52.697184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.512 [2024-07-25 07:35:52.709305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.512 [2024-07-25 07:35:52.709322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.709329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.722206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.722224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.722231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.734338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.734355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.734362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.746233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.746250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.746256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.758039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.758059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.758066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.770160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.770177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.770184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.782656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.782672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.782679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.795424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.795441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.795447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.807902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.807918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.807925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.820317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.820334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.820341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.833015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.833033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.833039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.845380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 
[2024-07-25 07:35:52.845397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.845403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.857251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.857268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.857275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.513 [2024-07-25 07:35:52.869192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.513 [2024-07-25 07:35:52.869212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.513 [2024-07-25 07:35:52.869219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:52.882090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:52.882108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:52.882115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:52.893497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:52.893515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:52.893521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:52.905568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:52.905586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:52.905592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:52.918323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:52.918341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:52.918348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:52.929670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:52.929688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:52.929695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:52.942945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:52.942963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:52.942970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:52.956394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:52.956410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:52.956417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:52.966934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:52.966951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:52.966961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:52.979261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:52.979279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:52.979286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:52.992647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:52.992664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:52.992671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:53.004261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:53.004278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:53.004285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:53.016365] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:53.016382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:53.016389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:53.029297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:53.029314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:53.029321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:53.040841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:53.040858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:53.040865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:53.052724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:53.052742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:53.052748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:53.065949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:53.065966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:53.065972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:53.077786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:53.077806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:53.077813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:53.089787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:53.089805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:53.089811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:45.779 [2024-07-25 07:35:53.102124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:53.102142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:53.102149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:53.114666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.779 [2024-07-25 07:35:53.114683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.779 [2024-07-25 07:35:53.114690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.779 [2024-07-25 07:35:53.126581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.780 [2024-07-25 07:35:53.126599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-25 07:35:53.126606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.780 [2024-07-25 07:35:53.139963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:45.780 [2024-07-25 07:35:53.139981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-25 07:35:53.139987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.152393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.152410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.152417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.164123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.164140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.164148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.176024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.176042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.176049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.188531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.188548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.188555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.201094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.201111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.201118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.213813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.213830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.213837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.224207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.224224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.224231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.236564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.236581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.236588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.250000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.250017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.250024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.261473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.261491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.261497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.274548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.274566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.274573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.286768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.286786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.286799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.298598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.298616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.298623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.310544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.310562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.310569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.323137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.323155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.323161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.336415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.336432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.336439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.348519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.348536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.348543] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.360629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.360646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.360653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.372348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.372366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.372373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.383675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.383693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.383699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.396660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.396678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.396685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.409973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.409990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.409997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.422062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.074 [2024-07-25 07:35:53.422079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.074 [2024-07-25 07:35:53.422086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.074 [2024-07-25 07:35:53.433817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.075 [2024-07-25 07:35:53.433834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:46.075 [2024-07-25 07:35:53.433841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.336 [2024-07-25 07:35:53.446350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.336 [2024-07-25 07:35:53.446367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.336 [2024-07-25 07:35:53.446374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.336 [2024-07-25 07:35:53.457535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.336 [2024-07-25 07:35:53.457552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.336 [2024-07-25 07:35:53.457560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.336 [2024-07-25 07:35:53.470046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.336 [2024-07-25 07:35:53.470063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.336 [2024-07-25 07:35:53.470070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.336 [2024-07-25 07:35:53.482775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.336 [2024-07-25 07:35:53.482792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.336 [2024-07-25 07:35:53.482799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.336 [2024-07-25 07:35:53.494941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77fa30) 00:29:46.336 [2024-07-25 07:35:53.494959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.336 [2024-07-25 07:35:53.494969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.336 00:29:46.336 Latency(us) 00:29:46.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.336 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:46.336 nvme0n1 : 2.00 20780.04 81.17 0.00 0.00 6151.53 3850.24 17694.72 00:29:46.336 =================================================================================================================== 00:29:46.336 Total : 20780.04 81.17 0.00 0.00 6151.53 3850.24 17694.72 00:29:46.336 0 00:29:46.336 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:46.336 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:46.336 07:35:53 
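At this point host/digest.sh reads back how many completions the host counted as transient transport errors during the run above; the get_transient_errcount call just traced expands to the bdev_get_iostat/jq pipeline traced next. Condensed, it amounts to the following sketch (rpc.py path, bperf.sock socket and jq filter taken from the trace; an illustration, not the canonical script):

get_transient_errcount() {
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The assertion traced below passes only if at least one COMMAND TRANSIENT
# TRANSPORT ERROR was recorded; in this run the counter reads 163.
(( $(get_transient_errcount nvme0n1) > 0 ))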
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:46.336 | .driver_specific 00:29:46.336 | .nvme_error 00:29:46.336 | .status_code 00:29:46.336 | .command_transient_transport_error' 00:29:46.336 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:46.336 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:29:46.336 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 270744 00:29:46.336 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 270744 ']' 00:29:46.336 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 270744 00:29:46.336 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:46.336 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:46.336 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 270744 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 270744' 00:29:46.598 killing process with pid 270744 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 270744 00:29:46.598 Received shutdown signal, test time was about 2.000000 seconds 00:29:46.598 00:29:46.598 Latency(us) 00:29:46.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.598 =================================================================================================================== 00:29:46.598 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 270744 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=271429 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 271429 /var/tmp/bperf.sock 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 271429 ']' 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 
2 -q 16 -z 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:46.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:46.598 07:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.598 [2024-07-25 07:35:53.898173] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:29:46.598 [2024-07-25 07:35:53.898234] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid271429 ] 00:29:46.598 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:46.598 Zero copy mechanism will not be used. 00:29:46.598 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.859 [2024-07-25 07:35:53.972410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.859 [2024-07-25 07:35:54.025345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.430 07:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:47.430 07:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:47.430 07:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:47.430 07:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:47.691 07:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:47.691 07:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.691 07:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:47.691 07:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.691 07:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:47.691 07:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:47.953 nvme0n1 00:29:47.953 07:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:47.953 07:35:55 
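For the next pass the script has started a fresh bdevperf in RPC-driven mode, enabled per-status-code NVMe error counting with unlimited bdev retries, attached the controller with data digest enabled, and armed crc32c corruption through the accel error-injection RPC; the workload itself is then kicked off with perform_tests, traced just below. Condensed from the traced commands (paths, addresses and flags as shown above; rpc_cmd and waitforlisten are the autotest_common.sh helpers used in the trace), the flow is roughly:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock
BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s $BPERF_SOCK"

# -z keeps bdevperf idle until it is configured and told to run over RPC,
# so the script waits for the UNIX domain socket before issuing any RPCs.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
waitforlisten "$bperfpid" "$BPERF_SOCK"

# Keep NVMe error statistics and never give up on bdev-level retries.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest (--ddgst) enabled, then corrupt crc32c results via
# the accel error-injection RPC (flags as in the traced run) so read
# completions come back with data digest errors.
rpc_cmd accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the queued reads; each corrupted digest surfaces below as a
# COMMAND TRANSIENT TRANSPORT ERROR completion on the host.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests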
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.953 07:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:47.953 07:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.953 07:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:47.953 07:35:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:47.953 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:47.953 Zero copy mechanism will not be used. 00:29:47.953 Running I/O for 2 seconds... 00:29:47.953 [2024-07-25 07:35:55.219510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:47.953 [2024-07-25 07:35:55.219541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.953 [2024-07-25 07:35:55.219550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.953 [2024-07-25 07:35:55.235910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:47.953 [2024-07-25 07:35:55.235932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.953 [2024-07-25 07:35:55.235939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.953 [2024-07-25 07:35:55.252387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:47.953 [2024-07-25 07:35:55.252405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.953 [2024-07-25 07:35:55.252412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.953 [2024-07-25 07:35:55.269204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:47.953 [2024-07-25 07:35:55.269224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.953 [2024-07-25 07:35:55.269231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.953 [2024-07-25 07:35:55.284943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:47.953 [2024-07-25 07:35:55.284961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.953 [2024-07-25 07:35:55.284968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.953 [2024-07-25 07:35:55.300798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa2aa30) 00:29:47.953 [2024-07-25 07:35:55.300817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.953 [2024-07-25 07:35:55.300824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.953 [2024-07-25 07:35:55.317957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:47.953 [2024-07-25 07:35:55.317976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.953 [2024-07-25 07:35:55.317983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.336845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.336865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.336871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.352581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.352604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.352610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.370161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.370179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.370186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.385652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.385670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.385676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.402685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.402703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.402710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.420830] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.420848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.420855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.438976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.438994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.439000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.457845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.457863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.457870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.473672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.473690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.473696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.489703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.489721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.489728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.507212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.507230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.507237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.523292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.523310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.523316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:29:48.215 [2024-07-25 07:35:55.540080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.540098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.540104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.555797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.555815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.555822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.215 [2024-07-25 07:35:55.574453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.215 [2024-07-25 07:35:55.574470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.215 [2024-07-25 07:35:55.574476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.589878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.589896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.589902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.608011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.608029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.608035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.624215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.624232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.624239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.639745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.639764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.639773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.656079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.656097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.656104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.672463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.672481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.672488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.688355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.688373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.688380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.705993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.706011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.706018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.718669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.718687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.718694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.735702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.735721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.735727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.752364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.752383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.752389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.770658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.770678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.770684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.787563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.787583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.787589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.477 [2024-07-25 07:35:55.804418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.477 [2024-07-25 07:35:55.804436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.477 [2024-07-25 07:35:55.804443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.478 [2024-07-25 07:35:55.821595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.478 [2024-07-25 07:35:55.821614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.478 [2024-07-25 07:35:55.821621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.478 [2024-07-25 07:35:55.837507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.478 [2024-07-25 07:35:55.837526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.478 [2024-07-25 07:35:55.837532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.739 [2024-07-25 07:35:55.853637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.739 [2024-07-25 07:35:55.853656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.739 [2024-07-25 07:35:55.853663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.739 [2024-07-25 07:35:55.868023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.739 [2024-07-25 07:35:55.868042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.739 
[2024-07-25 07:35:55.868049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.739 [2024-07-25 07:35:55.883249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.739 [2024-07-25 07:35:55.883268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.739 [2024-07-25 07:35:55.883274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.739 [2024-07-25 07:35:55.900695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.739 [2024-07-25 07:35:55.900714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.739 [2024-07-25 07:35:55.900720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.739 [2024-07-25 07:35:55.918111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.739 [2024-07-25 07:35:55.918130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.739 [2024-07-25 07:35:55.918139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.739 [2024-07-25 07:35:55.936050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.739 [2024-07-25 07:35:55.936069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.739 [2024-07-25 07:35:55.936075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.739 [2024-07-25 07:35:55.950365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.739 [2024-07-25 07:35:55.950384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.739 [2024-07-25 07:35:55.950391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.739 [2024-07-25 07:35:55.965297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.739 [2024-07-25 07:35:55.965316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.739 [2024-07-25 07:35:55.965322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.739 [2024-07-25 07:35:55.981670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.739 [2024-07-25 07:35:55.981688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:48.739 [2024-07-25 07:35:55.981694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.739 [2024-07-25 07:35:55.997521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.739 [2024-07-25 07:35:55.997541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.739 [2024-07-25 07:35:55.997547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.739 [2024-07-25 07:35:56.013806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.740 [2024-07-25 07:35:56.013824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.740 [2024-07-25 07:35:56.013831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.740 [2024-07-25 07:35:56.029713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.740 [2024-07-25 07:35:56.029732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.740 [2024-07-25 07:35:56.029739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:48.740 [2024-07-25 07:35:56.046866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.740 [2024-07-25 07:35:56.046884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.740 [2024-07-25 07:35:56.046890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:48.740 [2024-07-25 07:35:56.062605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.740 [2024-07-25 07:35:56.062627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.740 [2024-07-25 07:35:56.062633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.740 [2024-07-25 07:35:56.079048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.740 [2024-07-25 07:35:56.079067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.740 [2024-07-25 07:35:56.079073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:48.740 [2024-07-25 07:35:56.095156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:48.740 [2024-07-25 07:35:56.095175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.740 [2024-07-25 07:35:56.095181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.001 [2024-07-25 07:35:56.111402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.001 [2024-07-25 07:35:56.111420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.001 [2024-07-25 07:35:56.111427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.001 [2024-07-25 07:35:56.127472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.001 [2024-07-25 07:35:56.127491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.001 [2024-07-25 07:35:56.127497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.001 [2024-07-25 07:35:56.142080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.001 [2024-07-25 07:35:56.142099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.001 [2024-07-25 07:35:56.142105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.001 [2024-07-25 07:35:56.160207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.001 [2024-07-25 07:35:56.160225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.001 [2024-07-25 07:35:56.160232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.001 [2024-07-25 07:35:56.176022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.001 [2024-07-25 07:35:56.176040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.001 [2024-07-25 07:35:56.176047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.001 [2024-07-25 07:35:56.191479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.001 [2024-07-25 07:35:56.191498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.001 [2024-07-25 07:35:56.191504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.001 [2024-07-25 07:35:56.208069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.002 [2024-07-25 07:35:56.208087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.002 [2024-07-25 07:35:56.208094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.002 [2024-07-25 07:35:56.224479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.002 [2024-07-25 07:35:56.224498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.002 [2024-07-25 07:35:56.224504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.002 [2024-07-25 07:35:56.241341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.002 [2024-07-25 07:35:56.241361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.002 [2024-07-25 07:35:56.241367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.002 [2024-07-25 07:35:56.258146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.002 [2024-07-25 07:35:56.258164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.002 [2024-07-25 07:35:56.258170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.002 [2024-07-25 07:35:56.274731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.002 [2024-07-25 07:35:56.274749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.002 [2024-07-25 07:35:56.274755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.002 [2024-07-25 07:35:56.289711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.002 [2024-07-25 07:35:56.289730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.002 [2024-07-25 07:35:56.289737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.002 [2024-07-25 07:35:56.307888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.002 [2024-07-25 07:35:56.307907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.002 [2024-07-25 07:35:56.307913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.002 [2024-07-25 07:35:56.324856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.002 
[2024-07-25 07:35:56.324874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.002 [2024-07-25 07:35:56.324881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.002 [2024-07-25 07:35:56.342141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.002 [2024-07-25 07:35:56.342160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.002 [2024-07-25 07:35:56.342169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.002 [2024-07-25 07:35:56.357923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.002 [2024-07-25 07:35:56.357941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.002 [2024-07-25 07:35:56.357948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.375837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.375856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.375863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.392534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.392553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.392560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.408518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.408537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.408544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.424388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.424408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.424414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.439936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.439956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.439964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.457938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.457957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.457964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.475418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.475437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.475444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.489720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.489742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.489748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.505700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.505719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.505725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.522121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.522140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.522146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.538282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.538300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.538307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.555096] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.555114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.555121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.572633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.572652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.572659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.588282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.588301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.588307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.604106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.604126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.604132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.264 [2024-07-25 07:35:56.620421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.264 [2024-07-25 07:35:56.620440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.264 [2024-07-25 07:35:56.620447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.636787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.636806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.636813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.652357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.652376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.652383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:49.526 [2024-07-25 07:35:56.667952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.667972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.667978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.685487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.685506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.685513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.702509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.702527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.702534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.718962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.718981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.718988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.735173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.735192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.735198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.751428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.751447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.751454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.767341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.767359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.767368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.783364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.783383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.783390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.798288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.798308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.798315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.817390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.817408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.817415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.832048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.832067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.832073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.526 [2024-07-25 07:35:56.847885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.526 [2024-07-25 07:35:56.847903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.526 [2024-07-25 07:35:56.847910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.527 [2024-07-25 07:35:56.863428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.527 [2024-07-25 07:35:56.863447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.527 [2024-07-25 07:35:56.863453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.527 [2024-07-25 07:35:56.883193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.527 [2024-07-25 07:35:56.883217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.527 [2024-07-25 07:35:56.883223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.788 [2024-07-25 07:35:56.899213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.788 [2024-07-25 07:35:56.899233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.788 [2024-07-25 07:35:56.899240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.788 [2024-07-25 07:35:56.913762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.788 [2024-07-25 07:35:56.913784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.788 [2024-07-25 07:35:56.913790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.788 [2024-07-25 07:35:56.930712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.788 [2024-07-25 07:35:56.930731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.788 [2024-07-25 07:35:56.930738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.788 [2024-07-25 07:35:56.949459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.788 [2024-07-25 07:35:56.949478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.788 [2024-07-25 07:35:56.949484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.788 [2024-07-25 07:35:56.965875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.788 [2024-07-25 07:35:56.965895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.788 [2024-07-25 07:35:56.965901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.788 [2024-07-25 07:35:56.983557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.788 [2024-07-25 07:35:56.983576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.788 [2024-07-25 07:35:56.983582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.788 [2024-07-25 07:35:56.998145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.788 [2024-07-25 07:35:56.998164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.788 [2024-07-25 07:35:56.998170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.788 [2024-07-25 07:35:57.014512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.788 [2024-07-25 07:35:57.014531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.788 [2024-07-25 07:35:57.014537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.788 [2024-07-25 07:35:57.030808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.788 [2024-07-25 07:35:57.030827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.788 [2024-07-25 07:35:57.030833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.788 [2024-07-25 07:35:57.045937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.788 [2024-07-25 07:35:57.045956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.788 [2024-07-25 07:35:57.045962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.788 [2024-07-25 07:35:57.062279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.789 [2024-07-25 07:35:57.062298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.789 [2024-07-25 07:35:57.062304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.789 [2024-07-25 07:35:57.079279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.789 [2024-07-25 07:35:57.079298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.789 [2024-07-25 07:35:57.079305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.789 [2024-07-25 07:35:57.095880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.789 [2024-07-25 07:35:57.095900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.789 [2024-07-25 07:35:57.095906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.789 [2024-07-25 07:35:57.112355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.789 [2024-07-25 07:35:57.112374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.789 
[2024-07-25 07:35:57.112381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.789 [2024-07-25 07:35:57.129163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.789 [2024-07-25 07:35:57.129182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.789 [2024-07-25 07:35:57.129188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.789 [2024-07-25 07:35:57.144111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:49.789 [2024-07-25 07:35:57.144130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.789 [2024-07-25 07:35:57.144137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.050 [2024-07-25 07:35:57.158069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:50.050 [2024-07-25 07:35:57.158088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.050 [2024-07-25 07:35:57.158094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.050 [2024-07-25 07:35:57.170974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:50.050 [2024-07-25 07:35:57.170993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.050 [2024-07-25 07:35:57.170999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.050 [2024-07-25 07:35:57.184624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:50.050 [2024-07-25 07:35:57.184643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.050 [2024-07-25 07:35:57.184653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.050 [2024-07-25 07:35:57.199374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2aa30) 00:29:50.050 [2024-07-25 07:35:57.199393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.050 [2024-07-25 07:35:57.199399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.050 00:29:50.050 Latency(us) 00:29:50.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.050 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:50.050 nvme0n1 : 2.04 1853.72 231.72 0.00 0.00 8461.59 2143.57 50025.81 00:29:50.050 
=================================================================================================================== 00:29:50.050 Total : 1853.72 231.72 0.00 0.00 8461.59 2143.57 50025.81 00:29:50.051 0 00:29:50.051 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:50.051 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:50.051 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:50.051 | .driver_specific 00:29:50.051 | .nvme_error 00:29:50.051 | .status_code 00:29:50.051 | .command_transient_transport_error' 00:29:50.051 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 122 > 0 )) 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 271429 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 271429 ']' 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 271429 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 271429 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 271429' 00:29:50.312 killing process with pid 271429 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 271429 00:29:50.312 Received shutdown signal, test time was about 2.000000 seconds 00:29:50.312 00:29:50.312 Latency(us) 00:29:50.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.312 =================================================================================================================== 00:29:50.312 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 271429 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:50.312 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=272127 
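The check in the trace just above boils down to one RPC round-trip: ask the bdevperf instance listening on /var/tmp/bperf.sock for per-bdev I/O statistics and pull out the transient-transport-error tally that the injected digest corruption is expected to bump. The sketch below mirrors the rpc.py + jq pipeline visible in the xtrace; the helper name and the standalone-script framing are illustrative rather than the exact code in host/digest.sh, and the SPDK path and socket are simply the ones this job uses.

  #!/usr/bin/env bash
  # Minimal sketch of the transient-error count check seen in the trace above.
  # SPDK_DIR and BPERF_SOCK are taken from this job's workspace; adjust as needed.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Query iostat for the attached bdev and extract the per-status-code counter
  # for COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions.
  get_transient_errcount() {
      local bdev=$1
      "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }

  # The randread pass above counted 122 such completions; any value > 0 passes.
  errcount=$(get_transient_errcount nvme0n1)
  (( errcount > 0 )) || exit 1

The counter is non-zero because the data-digest corruption injected through accel_error_inject_error surfaces on the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions, and with bdev_nvme_set_options --nvme-error-stat the bdev layer appears to keep a per-status-code tally that bdev_get_iostat reports; the same check is about to be repeated for the randwrite / 4096-byte / qd=128 bdevperf run being started next.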
00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 272127 /var/tmp/bperf.sock 00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 272127 ']' 00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:50.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:50.313 07:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:50.313 [2024-07-25 07:35:57.652226] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:29:50.313 [2024-07-25 07:35:57.652300] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid272127 ] 00:29:50.313 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.574 [2024-07-25 07:35:57.727794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.574 [2024-07-25 07:35:57.780584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.146 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:51.146 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:51.146 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:51.146 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:51.407 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:51.407 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.407 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:51.407 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.407 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:51.407 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:51.668 nvme0n1 00:29:51.668 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:51.668 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.668 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:51.668 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.668 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:51.668 07:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:51.668 Running I/O for 2 seconds... 00:29:51.930 [2024-07-25 07:35:59.039727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.040596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.040624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.051988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.052419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.052438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.064453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.064744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.064761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.076634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.076922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.076939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.088825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.089161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.089178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.100980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.101282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.101298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.113163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.113576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.113592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.125232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.125669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.125689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.137454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.137879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.137895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.149672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.150094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.150110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.161820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.162218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.162234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.173927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.174397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.174413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.186034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.186504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.186520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.930 [2024-07-25 07:35:59.198144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.930 [2024-07-25 07:35:59.198565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.930 [2024-07-25 07:35:59.198580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.931 [2024-07-25 07:35:59.210231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.931 [2024-07-25 07:35:59.210604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.931 [2024-07-25 07:35:59.210619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.931 [2024-07-25 07:35:59.222306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.931 [2024-07-25 07:35:59.222738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.931 [2024-07-25 07:35:59.222754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.931 [2024-07-25 07:35:59.234512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.931 [2024-07-25 07:35:59.234844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.931 [2024-07-25 07:35:59.234860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.931 [2024-07-25 07:35:59.246734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.931 [2024-07-25 07:35:59.247168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.931 [2024-07-25 07:35:59.247184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.931 [2024-07-25 07:35:59.258794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.931 [2024-07-25 07:35:59.259106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.931 [2024-07-25 07:35:59.259121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.931 [2024-07-25 07:35:59.270967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.931 [2024-07-25 07:35:59.271407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.931 [2024-07-25 07:35:59.271423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.931 [2024-07-25 07:35:59.283078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.931 [2024-07-25 07:35:59.283573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.931 [2024-07-25 07:35:59.283589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.931 [2024-07-25 07:35:59.295120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:51.931 [2024-07-25 07:35:59.295532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.931 [2024-07-25 07:35:59.295549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.307282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.307565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.307580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.319366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.319769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.319785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.331555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.331971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.331986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.343645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.343968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.343984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.355738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.356146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.356161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.367842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.368178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.368193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.379961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.380422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.380438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.392068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.392356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.392372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.404172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.404484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.404500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.416294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.416601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.416617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.428449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.428753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 
07:35:59.428769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.440532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.440824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.440842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.452671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.453100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.453116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.464834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.465241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.465258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.476874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.477291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.200 [2024-07-25 07:35:59.477307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.200 [2024-07-25 07:35:59.489024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.200 [2024-07-25 07:35:59.489426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.201 [2024-07-25 07:35:59.489442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.201 [2024-07-25 07:35:59.501140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.201 [2024-07-25 07:35:59.501441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.201 [2024-07-25 07:35:59.501457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.201 [2024-07-25 07:35:59.513221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.201 [2024-07-25 07:35:59.513611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:52.201 [2024-07-25 07:35:59.513627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.201 [2024-07-25 07:35:59.525373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.201 [2024-07-25 07:35:59.525666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.201 [2024-07-25 07:35:59.525683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.201 [2024-07-25 07:35:59.537471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.201 [2024-07-25 07:35:59.537755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.201 [2024-07-25 07:35:59.537771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.201 [2024-07-25 07:35:59.549600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.201 [2024-07-25 07:35:59.550091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.201 [2024-07-25 07:35:59.550106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.201 [2024-07-25 07:35:59.561770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.201 [2024-07-25 07:35:59.562270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.201 [2024-07-25 07:35:59.562287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.574005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.574301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.574316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.586105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.586497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.586514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.598236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.598542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8712 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.598558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.610321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.610742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.610758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.622446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.622754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.622770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.634547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.634829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.634845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.646703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.646998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.647013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.658816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.659208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.659224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.670913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.671294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.671310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.683034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.683445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19831 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.683461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.695156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.695568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.695584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.707229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.707661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.707677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.719359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.719744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.719760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.731538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.731938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.731954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.743627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.744093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.744109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.755753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.756183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.756204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.767882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.768277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:11977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.768294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.780002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.780416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.780432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.792126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.792534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.792550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.804276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.804718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.804734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.816444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.816860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.816876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.465 [2024-07-25 07:35:59.828632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.465 [2024-07-25 07:35:59.828921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.465 [2024-07-25 07:35:59.828938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.727 [2024-07-25 07:35:59.840743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.727 [2024-07-25 07:35:59.841060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.727 [2024-07-25 07:35:59.841076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.727 [2024-07-25 07:35:59.852861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.727 [2024-07-25 07:35:59.853145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:20845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.727 [2024-07-25 07:35:59.853161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.727 [2024-07-25 07:35:59.864996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.727 [2024-07-25 07:35:59.865430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.727 [2024-07-25 07:35:59.865446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.727 [2024-07-25 07:35:59.877112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.727 [2024-07-25 07:35:59.877580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.727 [2024-07-25 07:35:59.877596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.727 [2024-07-25 07:35:59.889264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.727 [2024-07-25 07:35:59.889683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.727 [2024-07-25 07:35:59.889699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.727 [2024-07-25 07:35:59.901351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.727 [2024-07-25 07:35:59.901752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.727 [2024-07-25 07:35:59.901767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.727 [2024-07-25 07:35:59.913488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.727 [2024-07-25 07:35:59.913770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.727 [2024-07-25 07:35:59.913786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.727 [2024-07-25 07:35:59.925630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.727 [2024-07-25 07:35:59.925962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.727 [2024-07-25 07:35:59.925978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:35:59.937756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:35:59.938156] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:35:59.938172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:35:59.949835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:35:59.950232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:35:59.950248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:35:59.961956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:35:59.962353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:35:59.962369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:35:59.974089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:35:59.974514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:35:59.974530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:35:59.986271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:35:59.986568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:35:59.986584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:35:59.998512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:35:59.998931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:35:59.998947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:36:00.011273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:36:00.011688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:36:00.011705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:36:00.024700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:36:00.025123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:36:00.025139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:36:00.036862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:36:00.037129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:36:00.037145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:36:00.048982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:36:00.049315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:36:00.049330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:36:00.061366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:36:00.061804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:36:00.061821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:36:00.073481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:36:00.073844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:36:00.073862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.728 [2024-07-25 07:36:00.086279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.728 [2024-07-25 07:36:00.086648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.728 [2024-07-25 07:36:00.086664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.990 [2024-07-25 07:36:00.098482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.990 [2024-07-25 07:36:00.098763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.990 [2024-07-25 07:36:00.098779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.990 [2024-07-25 07:36:00.110625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.990 [2024-07-25 07:36:00.111061] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.990 [2024-07-25 07:36:00.111078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.990 [2024-07-25 07:36:00.122755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.990 [2024-07-25 07:36:00.123167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.990 [2024-07-25 07:36:00.123183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.990 [2024-07-25 07:36:00.134879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.990 [2024-07-25 07:36:00.135311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.990 [2024-07-25 07:36:00.135327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.990 [2024-07-25 07:36:00.147050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.990 [2024-07-25 07:36:00.147331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.990 [2024-07-25 07:36:00.147348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.990 [2024-07-25 07:36:00.159165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.990 [2024-07-25 07:36:00.159573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.990 [2024-07-25 07:36:00.159589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.990 [2024-07-25 07:36:00.171417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.990 [2024-07-25 07:36:00.171789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.990 [2024-07-25 07:36:00.171805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.990 [2024-07-25 07:36:00.183559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.990 [2024-07-25 07:36:00.183863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.990 [2024-07-25 07:36:00.183880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.990 [2024-07-25 07:36:00.195709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.990 [2024-07-25 
07:36:00.196134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.990 [2024-07-25 07:36:00.196150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.990 [2024-07-25 07:36:00.207820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.208211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.208227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.219930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.220369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.220385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.232040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.232433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.232449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.244185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.244580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.244597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.256461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.256744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.256760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.268559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.268847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.268863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.280699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 
00:29:52.991 [2024-07-25 07:36:00.281101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.281117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.292815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.293315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.293331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.304978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.305367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.305384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.317080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.317491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.317507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.329224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.329652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.329668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.341351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.341700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.341716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:52.991 [2024-07-25 07:36:00.353466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:52.991 [2024-07-25 07:36:00.353954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:52.991 [2024-07-25 07:36:00.353969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.253 [2024-07-25 07:36:00.365568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with 
pdu=0x2000190fe2e8 00:29:53.253 [2024-07-25 07:36:00.365853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.253 [2024-07-25 07:36:00.365869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.253 [2024-07-25 07:36:00.377704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.253 [2024-07-25 07:36:00.378096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.253 [2024-07-25 07:36:00.378112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.253 [2024-07-25 07:36:00.389787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.253 [2024-07-25 07:36:00.390184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.253 [2024-07-25 07:36:00.390204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.253 [2024-07-25 07:36:00.402131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.253 [2024-07-25 07:36:00.402636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.253 [2024-07-25 07:36:00.402652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.253 [2024-07-25 07:36:00.414222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.253 [2024-07-25 07:36:00.414642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.253 [2024-07-25 07:36:00.414657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.253 [2024-07-25 07:36:00.426359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.253 [2024-07-25 07:36:00.426807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.253 [2024-07-25 07:36:00.426822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.253 [2024-07-25 07:36:00.438501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.253 [2024-07-25 07:36:00.438824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.438840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.450626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.451068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.451084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.462755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.463166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.463182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.474943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.475347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.475363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.487015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.487435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.487451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.499183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.499631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.499649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.511290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.511605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.511621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.523397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.523837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.523853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.535609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.535906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.535922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.547716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.548173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.548189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.559843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.560300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.560316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.572024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.572430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.572447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.584164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.584587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.584603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.596308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.596689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.596705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.608445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.254 [2024-07-25 07:36:00.608736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.254 [2024-07-25 07:36:00.608753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.254 [2024-07-25 07:36:00.620641] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.516 [2024-07-25 07:36:00.621040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.516 [2024-07-25 07:36:00.621057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.516 [2024-07-25 07:36:00.632743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.516 [2024-07-25 07:36:00.633203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.516 [2024-07-25 07:36:00.633219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.516 [2024-07-25 07:36:00.644884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.516 [2024-07-25 07:36:00.645187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.516 [2024-07-25 07:36:00.645207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.516 [2024-07-25 07:36:00.657024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.516 [2024-07-25 07:36:00.657306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.516 [2024-07-25 07:36:00.657323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.516 [2024-07-25 07:36:00.669149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.516 [2024-07-25 07:36:00.669557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.516 [2024-07-25 07:36:00.669573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.516 [2024-07-25 07:36:00.681266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.516 [2024-07-25 07:36:00.681700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.516 [2024-07-25 07:36:00.681716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.516 [2024-07-25 07:36:00.693402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.516 [2024-07-25 07:36:00.693800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.516 [2024-07-25 07:36:00.693816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.516 [2024-07-25 
07:36:00.705538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.516 [2024-07-25 07:36:00.705835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.516 [2024-07-25 07:36:00.705850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.516 [2024-07-25 07:36:00.717687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.516 [2024-07-25 07:36:00.718031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.516 [2024-07-25 07:36:00.718047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.516 [2024-07-25 07:36:00.729862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.516 [2024-07-25 07:36:00.730194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.516 [2024-07-25 07:36:00.730214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.516 [2024-07-25 07:36:00.741990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.516 [2024-07-25 07:36:00.742412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.742427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.517 [2024-07-25 07:36:00.754136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.517 [2024-07-25 07:36:00.754567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.754583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.517 [2024-07-25 07:36:00.766258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.517 [2024-07-25 07:36:00.766663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.766678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.517 [2024-07-25 07:36:00.778400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.517 [2024-07-25 07:36:00.778844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.778859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:29:53.517 [2024-07-25 07:36:00.790573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.517 [2024-07-25 07:36:00.790860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.790876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.517 [2024-07-25 07:36:00.802684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.517 [2024-07-25 07:36:00.803090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.803105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.517 [2024-07-25 07:36:00.814808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.517 [2024-07-25 07:36:00.815218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.815237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.517 [2024-07-25 07:36:00.826930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.517 [2024-07-25 07:36:00.827322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.827338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.517 [2024-07-25 07:36:00.839032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.517 [2024-07-25 07:36:00.839324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.839340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.517 [2024-07-25 07:36:00.851175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.517 [2024-07-25 07:36:00.851600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.851617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.517 [2024-07-25 07:36:00.863316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.517 [2024-07-25 07:36:00.863588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.863605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f 
p:0 m:0 dnr:0 00:29:53.517 [2024-07-25 07:36:00.875432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.517 [2024-07-25 07:36:00.875822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.517 [2024-07-25 07:36:00.875837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:00.887577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:00.887978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:00.887994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:00.899678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:00.900077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:00.900092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:00.911808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:00.912092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:00.912107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:00.923927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:00.924324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:00.924339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:00.936108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:00.936568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:00.936583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:00.948216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:00.948565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:00.948582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:00.960387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:00.960807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:00.960823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:00.972491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:00.972790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:00.972805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:00.984603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:00.985030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:00.985046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:00.996693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:00.997073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:00.997088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:01.008845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:01.009139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:01.009155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 [2024-07-25 07:36:01.020936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112dd20) with pdu=0x2000190fe2e8 00:29:53.779 [2024-07-25 07:36:01.021366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:53.779 [2024-07-25 07:36:01.021381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:53.779 00:29:53.779 Latency(us) 00:29:53.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.779 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.779 nvme0n1 : 2.01 20883.84 81.58 0.00 0.00 6116.95 5270.19 19770.03 00:29:53.779 =================================================================================================================== 00:29:53.779 Total : 20883.84 81.58 0.00 
0.00 6116.95 5270.19 19770.03 00:29:53.779 0 00:29:53.779 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:53.780 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:53.780 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:53.780 | .driver_specific 00:29:53.780 | .nvme_error 00:29:53.780 | .status_code 00:29:53.780 | .command_transient_transport_error' 00:29:53.780 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 272127 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 272127 ']' 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 272127 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 272127 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 272127' 00:29:54.041 killing process with pid 272127 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 272127 00:29:54.041 Received shutdown signal, test time was about 2.000000 seconds 00:29:54.041 00:29:54.041 Latency(us) 00:29:54.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.041 =================================================================================================================== 00:29:54.041 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 272127 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=272888 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 272888 /var/tmp/bperf.sock 00:29:54.041 
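The xtrace above shows how host/digest.sh validates the run that just completed: once the 2-second randwrite job reports its statistics, get_transient_errcount queries the bdev I/O statistics over the bperf RPC socket and the test requires the NVMe transient-transport-error counter to be non-zero (here it read 164) before killing the bdevperf process. A minimal sketch of that check, using only the RPC script path, socket and jq filter visible in the trace (the standalone form and the helper variables are illustrative, not the actual host/digest.sh code):

  # Sketch reconstructed from the xtrace above; the RPC script path, socket and
  # jq filter are taken from the log, the wrapper variables are illustrative only.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # fails the test unless the injected CRC32C digest corruption produced transient transport errors

The trace that follows sets up the next case (randwrite, 128 KiB I/O, queue depth 16): it launches a fresh bdevperf with -w randwrite -o 131072 -q 16 -z, attaches the controller with --ddgst so data digests are enabled, and re-arms the corruption via accel_error_inject_error -o crc32c -t corrupt -i 32.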
07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 272888 ']' 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:54.041 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:54.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:54.042 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:54.042 07:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:54.302 [2024-07-25 07:36:01.424990] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:29:54.302 [2024-07-25 07:36:01.425046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid272888 ] 00:29:54.302 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:54.302 Zero copy mechanism will not be used. 00:29:54.302 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.302 [2024-07-25 07:36:01.498064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.302 [2024-07-25 07:36:01.550869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.874 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.874 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:54.874 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:54.874 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:55.135 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:55.135 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.135 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:55.135 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.135 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:55.135 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:55.394 nvme0n1 00:29:55.394 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:55.394 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.394 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:55.394 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.394 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:55.394 07:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:55.394 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:55.395 Zero copy mechanism will not be used. 00:29:55.395 Running I/O for 2 seconds... 00:29:55.656 [2024-07-25 07:36:02.775957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.776254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.776281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.791416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.791732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.791754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.806786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.807087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.807106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.821893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.822194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.822218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.835889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.836270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:55.656 [2024-07-25 07:36:02.836289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.850507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.850880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.850900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.864065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.864342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.864361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.879083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.879348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.879367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.893414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.893696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.893721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.908524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.908846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.908864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.924132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.924394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.924412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.939182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.939450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.939469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.952051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.952315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.952334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.966974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.967237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.967255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.980727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.981042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.981061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:02.994081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:02.994344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:02.994371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:03.007493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:03.007778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:03.007797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.656 [2024-07-25 07:36:03.020275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.656 [2024-07-25 07:36:03.020566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.656 [2024-07-25 07:36:03.020585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.034439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.034700] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.034718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.048142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.048402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.048421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.063808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.064160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.064178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.078623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.078909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.078928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.091776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.092037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.092056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.107530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.107826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.107844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.121697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.121967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.121986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.135748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.136116] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.136137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.149627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.149921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.149939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.163056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.163318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.163338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.177599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.177859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.177877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.192006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.192285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.192304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.206929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.207189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.207211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.221878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.222180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.222197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.237652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 
00:29:55.918 [2024-07-25 07:36:03.237996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.238015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.252818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.253121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.253139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.267907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.268171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.268189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.918 [2024-07-25 07:36:03.282370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:55.918 [2024-07-25 07:36:03.282807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.918 [2024-07-25 07:36:03.282825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.296134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.296488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.296505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.310671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.310931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.310949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.325376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.325841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.325862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.339921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.340180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.340199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.353369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.353613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.353631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.367224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.367540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.367558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.381968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.382232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.382250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.396206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.396466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.396485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.409435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.409633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.409650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.422225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.422487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.422506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.434781] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.435040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.435059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.447161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.447423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.447441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.461713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.462029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.462047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.475669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.475928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.475947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.488834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.489093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.489111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.501363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.501624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.501646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.515461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.515722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.515739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
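Each of the failures above is reported by data_crc32_calc_done because the CRC-32C data digest computed for the received PDU no longer matches the DDGST field; NVMe/TCP defines both the header and data digests as CRC-32C, and the corrupt-crc32c injection makes that comparison fail even though the wire data is intact. A dependency-free, bit-by-bit sketch of the digest calculation is shown below (reflected polynomial 0x82F63B78, initial value and final XOR 0xFFFFFFFF); SPDK's real path uses table-driven or hardware-accelerated CRC, so this is illustrative only.

```python
def crc32c(data: bytes) -> int:
    """Bit-by-bit CRC-32C (Castagnoli): reflected polynomial 0x82F63B78,
    initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF


# Well-known CRC-32C check value.
assert crc32c(b"123456789") == 0xE3069283

# A single corrupted bit (or a corrupted digest result, as injected above)
# breaks the match between sender and receiver digests.
payload = bytes(4096)
assert crc32c(payload) != crc32c(b"\x01" + payload[1:])
```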
00:29:56.187 [2024-07-25 07:36:03.528544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.528802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.528821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.187 [2024-07-25 07:36:03.543888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.187 [2024-07-25 07:36:03.544148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.187 [2024-07-25 07:36:03.544167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.451 [2024-07-25 07:36:03.556882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.451 [2024-07-25 07:36:03.557140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.451 [2024-07-25 07:36:03.557158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.451 [2024-07-25 07:36:03.570361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.451 [2024-07-25 07:36:03.570723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.570743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.583722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.583983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.584002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.597849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.598108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.598126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.611444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.611802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.611822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.624542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.624805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.624824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.637567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.638029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.638047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.652668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.652930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.652949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.666158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.666419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.666438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.680221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.680482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.680500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.692886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.693080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.693100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.706678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.706937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.706955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.719865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.720124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.720143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.733609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.734013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.734035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.748220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.748481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.748499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.762505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.762765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.762784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.776904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.777165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.777183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.791596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.791857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.791876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.452 [2024-07-25 07:36:03.805790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.452 [2024-07-25 07:36:03.806086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.452 [2024-07-25 07:36:03.806105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.819555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.819794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.819811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.834129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.834399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.834417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.847588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.847849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.847867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.861657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.861957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.861979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.876540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.876815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.876834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.890080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.890375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.890394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.902973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.903248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 
[2024-07-25 07:36:03.903268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.916118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.916381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.916400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.930403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.930762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.930781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.945448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.945703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.945721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.958910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.959170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.959190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.972443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.972703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.972722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.985087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.985355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.985373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:03.998716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:03.998976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:03.998996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:04.013647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:04.013907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:04.013926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:04.029313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:04.029664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:04.029682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:04.044302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:04.044564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:04.044583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:04.058569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:04.058876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:04.058894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.714 [2024-07-25 07:36:04.072265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.714 [2024-07-25 07:36:04.072526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.714 [2024-07-25 07:36:04.072543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.976 [2024-07-25 07:36:04.087001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.976 [2024-07-25 07:36:04.087376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.976 [2024-07-25 07:36:04.087395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.976 [2024-07-25 07:36:04.103158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.976 [2024-07-25 07:36:04.103549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.103568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.117975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.118239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.118258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.133555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.133884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.133902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.148214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.148576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.148595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.162610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.162978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.162997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.176361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.176748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.176767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.190163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.190482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.190501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.204746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.205004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.205022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.218701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.218958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.218977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.233161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.233422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.233445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.246745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.247050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.247072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.260296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.260540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.260558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.275835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.276095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.276114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.291128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.291543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.291565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.305691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 
[2024-07-25 07:36:04.305983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.306001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.319953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.320235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.320254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.977 [2024-07-25 07:36:04.335586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:56.977 [2024-07-25 07:36:04.335847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.977 [2024-07-25 07:36:04.335864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.238 [2024-07-25 07:36:04.351591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.238 [2024-07-25 07:36:04.351961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.238 [2024-07-25 07:36:04.351980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.366262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.366520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.366539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.381526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.381919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.381937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.397219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.397493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.397512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.412934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.413194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.413219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.428073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.428366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.428384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.442256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.442517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.442536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.456900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.457251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.457269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.470671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.471082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.471101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.484630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.484987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.485010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.499108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.499366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.499385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.515179] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.515519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.515538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.529461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.529719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.529739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.544135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.544395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.544414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.558029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.558330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.558348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.573753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.574013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.574032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.587628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.587944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.587967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.239 [2024-07-25 07:36:04.601961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.239 [2024-07-25 07:36:04.602158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.239 [2024-07-25 07:36:04.602174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
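Every injected failure completes with status (00/22), which spdk_nvme_print_completion decodes as COMMAND TRANSIENT TRANSPORT ERROR with dnr:0, i.e. a retryable transport-level status rather than a media error; the script then reads the accumulated count back through bperf_rpc bdev_get_iostat at the end of this run. Below is a hypothetical post-processing helper, not part of the test suite, that tallies these completions straight from a saved console log as a cross-check.

```python
#!/usr/bin/env python3
"""Hypothetical helper: count the error completions that
spdk_nvme_print_completion logged in a saved console log."""
import re
import sys
from collections import Counter

COMPLETION = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: COMMAND (?P<status>[A-Z ]+ERROR) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)"
)


def count_error_completions(log_text: str) -> Counter:
    """Map (status string, 'sct/sc') -> number of completions seen in the log."""
    return Counter(
        (m.group("status"), f"{m.group('sct')}/{m.group('sc')}")
        for m in COMPLETION.finditer(log_text)
    )


if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        for (status, code), n in sorted(count_error_completions(fh.read()).items()):
            print(f"{n:5d}  {status} ({code})")
```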
00:29:57.500 [2024-07-25 07:36:04.616522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.500 [2024-07-25 07:36:04.616845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.500 [2024-07-25 07:36:04.616864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.500 [2024-07-25 07:36:04.630515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.500 [2024-07-25 07:36:04.630789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.500 [2024-07-25 07:36:04.630809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.500 [2024-07-25 07:36:04.645102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.500 [2024-07-25 07:36:04.645367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.500 [2024-07-25 07:36:04.645386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.500 [2024-07-25 07:36:04.658781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.500 [2024-07-25 07:36:04.659040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.500 [2024-07-25 07:36:04.659059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.500 [2024-07-25 07:36:04.672159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.500 [2024-07-25 07:36:04.672418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.500 [2024-07-25 07:36:04.672437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.500 [2024-07-25 07:36:04.685850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.500 [2024-07-25 07:36:04.686108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.500 [2024-07-25 07:36:04.686127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.501 [2024-07-25 07:36:04.699059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.501 [2024-07-25 07:36:04.699323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.501 [2024-07-25 07:36:04.699342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.501 [2024-07-25 07:36:04.712371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.501 [2024-07-25 07:36:04.712631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.501 [2024-07-25 07:36:04.712650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.501 [2024-07-25 07:36:04.726224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.501 [2024-07-25 07:36:04.726571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.501 [2024-07-25 07:36:04.726589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.501 [2024-07-25 07:36:04.741920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.501 [2024-07-25 07:36:04.742183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.501 [2024-07-25 07:36:04.742208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.501 [2024-07-25 07:36:04.754863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.501 [2024-07-25 07:36:04.755127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.501 [2024-07-25 07:36:04.755147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.501 [2024-07-25 07:36:04.768036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x112e060) with pdu=0x2000190fef90 00:29:57.501 [2024-07-25 07:36:04.768434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.501 [2024-07-25 07:36:04.768452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.501 00:29:57.501 Latency(us) 00:29:57.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.501 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:57.501 nvme0n1 : 2.01 2178.83 272.35 0.00 0.00 7327.42 4177.92 17039.36 00:29:57.501 =================================================================================================================== 00:29:57.501 Total : 2178.83 272.35 0.00 0.00 7327.42 4177.92 17039.36 00:29:57.501 0 00:29:57.501 07:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:57.501 07:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:57.501 07:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:57.501 | .driver_specific 00:29:57.501 | .nvme_error 
00:29:57.501 | .status_code 00:29:57.501 | .command_transient_transport_error' 00:29:57.501 07:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:57.762 07:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:29:57.762 07:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 272888 00:29:57.762 07:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 272888 ']' 00:29:57.762 07:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 272888 00:29:57.762 07:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:57.762 07:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:57.762 07:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 272888 00:29:57.762 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:57.762 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:57.762 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 272888' 00:29:57.762 killing process with pid 272888 00:29:57.762 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 272888 00:29:57.762 Received shutdown signal, test time was about 2.000000 seconds 00:29:57.762 00:29:57.762 Latency(us) 00:29:57.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:57.762 =================================================================================================================== 00:29:57.762 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:57.762 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 272888 00:29:57.762 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 270397 00:29:57.762 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 270397 ']' 00:29:57.762 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 270397 00:29:57.762 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 270397 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 270397' 00:29:58.023 killing process with pid 270397 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@969 -- # kill 270397 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 270397 00:29:58.023 00:29:58.023 real 0m16.136s 00:29:58.023 user 0m31.814s 00:29:58.023 sys 0m3.118s 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:58.023 ************************************ 00:29:58.023 END TEST nvmf_digest_error 00:29:58.023 ************************************ 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:58.023 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:58.023 rmmod nvme_tcp 00:29:58.023 rmmod nvme_fabrics 00:29:58.285 rmmod nvme_keyring 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 270397 ']' 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 270397 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 270397 ']' 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 270397 00:29:58.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (270397) - No such process 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 270397 is not found' 00:29:58.285 Process with pid 270397 is not found 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.285 07:36:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.199 07:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:00.199 00:30:00.199 real 0m42.150s 00:30:00.199 user 
1m6.048s 00:30:00.199 sys 0m11.777s 00:30:00.199 07:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:00.199 07:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:00.199 ************************************ 00:30:00.199 END TEST nvmf_digest 00:30:00.199 ************************************ 00:30:00.199 07:36:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:30:00.199 07:36:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:00.199 07:36:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:30:00.199 07:36:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:00.199 07:36:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:00.199 07:36:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:00.199 07:36:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.460 ************************************ 00:30:00.460 START TEST nvmf_bdevperf 00:30:00.460 ************************************ 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:00.460 * Looking for test storage... 00:30:00.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:30:00.460 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:00.461 07:36:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:30:08.658 07:36:14 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:08.658 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:08.658 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound 
]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:08.658 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:08.658 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:08.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:30:08.658 00:30:08.658 --- 10.0.0.2 ping statistics --- 00:30:08.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.658 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:30:08.658 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:08.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:30:08.658 00:30:08.658 --- 10.0.0.1 ping statistics --- 00:30:08.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.659 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=278373 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 278373 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 278373 ']' 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:08.659 07:36:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.659 [2024-07-25 07:36:15.019900] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
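The nvmf_tcp_init trace above splits the two detected E810 ports between a target network namespace and the root (initiator) namespace before the target application is started. A minimal consolidated sketch of that wiring, using only the interface names and addresses reported in the trace (address-flush and loopback steps omitted):
# target side: one port moves into its own namespace and gets the target address
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# initiator side: the second port stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# reachability check in both directions, matching the ping output above
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1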
00:30:08.659 [2024-07-25 07:36:15.019968] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.659 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.659 [2024-07-25 07:36:15.114766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:08.659 [2024-07-25 07:36:15.209145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.659 [2024-07-25 07:36:15.209216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.659 [2024-07-25 07:36:15.209225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.659 [2024-07-25 07:36:15.209232] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.659 [2024-07-25 07:36:15.209239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.659 [2024-07-25 07:36:15.209371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:08.659 [2024-07-25 07:36:15.209713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:08.659 [2024-07-25 07:36:15.209714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.659 [2024-07-25 07:36:15.855205] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.659 Malloc0 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.659 07:36:15 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:08.659 [2024-07-25 07:36:15.925160] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.659 { 00:30:08.659 "params": { 00:30:08.659 "name": "Nvme$subsystem", 00:30:08.659 "trtype": "$TEST_TRANSPORT", 00:30:08.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.659 "adrfam": "ipv4", 00:30:08.659 "trsvcid": "$NVMF_PORT", 00:30:08.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.659 "hdgst": ${hdgst:-false}, 00:30:08.659 "ddgst": ${ddgst:-false} 00:30:08.659 }, 00:30:08.659 "method": "bdev_nvme_attach_controller" 00:30:08.659 } 00:30:08.659 EOF 00:30:08.659 )") 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:08.659 07:36:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:08.659 "params": { 00:30:08.659 "name": "Nvme1", 00:30:08.659 "trtype": "tcp", 00:30:08.659 "traddr": "10.0.0.2", 00:30:08.659 "adrfam": "ipv4", 00:30:08.659 "trsvcid": "4420", 00:30:08.659 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:08.659 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:08.659 "hdgst": false, 00:30:08.659 "ddgst": false 00:30:08.659 }, 00:30:08.659 "method": "bdev_nvme_attach_controller" 00:30:08.659 }' 00:30:08.659 [2024-07-25 07:36:15.961380] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
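Consolidated from the rpc_cmd trace above, the tgt_init path amounts to the following target-side provisioning sequence; rpc_cmd is the test-harness wrapper around scripts/rpc.py, talking to the nvmf_tgt started above on /var/tmp/spdk.sock, and the names and sizes are exactly those shown in the trace:
# create the TCP transport and a 64 MB malloc bdev with 512-byte blocks
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
# expose the bdev through subsystem cnode1 and listen on the in-namespace address
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420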
00:30:08.659 [2024-07-25 07:36:15.961432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278421 ] 00:30:08.659 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.659 [2024-07-25 07:36:16.020055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.921 [2024-07-25 07:36:16.085180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.182 Running I/O for 1 seconds... 00:30:10.125 00:30:10.125 Latency(us) 00:30:10.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.125 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:10.125 Verification LBA range: start 0x0 length 0x4000 00:30:10.125 Nvme1n1 : 1.00 8663.27 33.84 0.00 0.00 14705.67 1897.81 15837.87 00:30:10.125 =================================================================================================================== 00:30:10.125 Total : 8663.27 33.84 0.00 0.00 14705.67 1897.81 15837.87 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=278762 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:10.387 { 00:30:10.387 "params": { 00:30:10.387 "name": "Nvme$subsystem", 00:30:10.387 "trtype": "$TEST_TRANSPORT", 00:30:10.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.387 "adrfam": "ipv4", 00:30:10.387 "trsvcid": "$NVMF_PORT", 00:30:10.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.387 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.387 "hdgst": ${hdgst:-false}, 00:30:10.387 "ddgst": ${ddgst:-false} 00:30:10.387 }, 00:30:10.387 "method": "bdev_nvme_attach_controller" 00:30:10.387 } 00:30:10.387 EOF 00:30:10.387 )") 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:10.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:10.387 "params": { 00:30:10.387 "name": "Nvme1", 00:30:10.387 "trtype": "tcp", 00:30:10.387 "traddr": "10.0.0.2", 00:30:10.387 "adrfam": "ipv4", 00:30:10.387 "trsvcid": "4420", 00:30:10.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:10.387 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:10.387 "hdgst": false, 00:30:10.387 "ddgst": false 00:30:10.387 }, 00:30:10.387 "method": "bdev_nvme_attach_controller" 00:30:10.387 }' 00:30:10.387 [2024-07-25 07:36:17.575663] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
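The two bdevperf invocations above follow the same pattern: a short verify run to confirm the data path, then a longer run during which the target process is deliberately killed (the kill -9 278373 a little further down) so the outstanding I/O is seen completing as aborted on the host side. In outline, with the pids taken from this trace:
# 1-second verify run against the live target (completed at ~8.6k IOPS above)
bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
# 15-second run started in the background, recorded as bdevperfpid=278762
bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f &
sleep 3
kill -9 278373   # nvmf_tgt pid (nvmfpid above); the trace then shows the queued WRITEs returning ABORTED - SQ DELETION
sleep 3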
00:30:10.387 [2024-07-25 07:36:17.575717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278762 ] 00:30:10.387 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.387 [2024-07-25 07:36:17.634494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.387 [2024-07-25 07:36:17.697617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.648 Running I/O for 15 seconds... 00:30:13.198 07:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 278373 00:30:13.198 07:36:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:13.198 [2024-07-25 07:36:20.540719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.198 [2024-07-25 07:36:20.540763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.198 [2024-07-25 07:36:20.540783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.198 [2024-07-25 07:36:20.540793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.198 [2024-07-25 07:36:20.540804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.198 [2024-07-25 07:36:20.540812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.198 [2024-07-25 07:36:20.540823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.198 [2024-07-25 07:36:20.540831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.198 [2024-07-25 07:36:20.540846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.198 [2024-07-25 07:36:20.540854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.198 [2024-07-25 07:36:20.540865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.198 [2024-07-25 07:36:20.540875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.198 [2024-07-25 07:36:20.540885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.198 [2024-07-25 07:36:20.540894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.198 [2024-07-25 07:36:20.540906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.198 [2024-07-25 07:36:20.540913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.198 [2024-07-25 07:36:20.540923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.198 [2024-07-25 07:36:20.540930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining queued commands on sqid:1 follow the same pattern and are elided here: WRITE commands (lba 86096-86672, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba 85712-86064, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each immediately followed by the identical completion "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" ...]
00:30:13.201 [2024-07-25 07:36:20.543071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc20900 is same with the state(5) to be set
00:30:13.201 [2024-07-25 07:36:20.543080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.201 [2024-07-25 07:36:20.543088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.201 [2024-07-25 07:36:20.543095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86072 len:8 PRP1 0x0 PRP2 0x0
00:30:13.201 [2024-07-25 07:36:20.543102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.201 [2024-07-25 07:36:20.543144] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc20900 was disconnected and freed. reset controller.
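Editor's note: the "(00/08)" pair printed on every completion above is the NVMe status code type / status code. SCT 0x0 is the generic command status set and SC 0x08 is "Command Aborted due to SQ Deletion", which is what each WRITE/READ still outstanding on qid:1 receives when the submission queue is torn down for the controller reset. As a minimal sketch (plain C, not SPDK's own print helper; struct and function names here are illustrative only), the 16-bit status+phase word from completion dword 3 can be decoded like this:

```c
/* decode_status.c - illustrative sketch, not SPDK source.
 * Decodes the completion "status + phase" halfword into the (SCT/SC)
 * pair shown in the log, e.g. "(00/08)". Field offsets follow the
 * NVMe base specification; only a few generic status codes listed. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint8_t p;    /* phase tag     */
    uint8_t sc;   /* status code   */
    uint8_t sct;  /* status code type */
    uint8_t m;    /* more          */
    uint8_t dnr;  /* do not retry  */
};

static struct nvme_status decode(uint16_t word)
{
    struct nvme_status s = {
        .p   = word & 0x1,
        .sc  = (word >> 1) & 0xff,
        .sct = (word >> 9) & 0x7,
        .m   = (word >> 14) & 0x1,
        .dnr = (word >> 15) & 0x1,
    };
    return s;
}

static const char *generic_sc_name(uint8_t sc)
{
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x02: return "INVALID FIELD IN COMMAND";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "OTHER GENERIC STATUS";
    }
}

int main(void)
{
    /* 0x0010: phase 0, SC 0x08, SCT 0x0 -> the "(00/08)" seen above */
    struct nvme_status s = decode(0x0010);

    printf("%s (%02x/%02x) p:%u m:%u dnr:%u\n",
           s.sct == 0 ? generic_sc_name(s.sc) : "NON-GENERIC",
           s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}
```

Built with `cc decode_status.c`, this prints "ABORTED - SQ DELETION (00/08) p:0 m:0 dnr:0", matching the completion lines above.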
00:30:13.201 [2024-07-25 07:36:20.546726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:13.201 [2024-07-25 07:36:20.546775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor
00:30:13.201 [2024-07-25 07:36:20.547769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.201 [2024-07-25 07:36:20.547807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420
00:30:13.201 [2024-07-25 07:36:20.547818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set
00:30:13.201 [2024-07-25 07:36:20.548058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor
00:30:13.201 [2024-07-25 07:36:20.548287] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:13.201 [2024-07-25 07:36:20.548297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:13.201 [2024-07-25 07:36:20.548306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:13.201 [2024-07-25 07:36:20.551810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 19 further reset/reconnect attempts against 10.0.0.2 port 4420 between 07:36:20.560900 and 07:36:20.815524 are elided; each one fails the same way: connect() failed, errno = 111; sock connection error of tqpair=0x9ee3d0; Ctrlr is in error state; controller reinitialization failed; Resetting controller failed. ...]
00:30:13.465 [2024-07-25 07:36:20.824555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.465 [2024-07-25 07:36:20.825310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.465 [2024-07-25 07:36:20.825352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.465 [2024-07-25 07:36:20.825364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.465 [2024-07-25 07:36:20.825605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.465 [2024-07-25 07:36:20.825826] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.465 [2024-07-25 07:36:20.825836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.465 [2024-07-25 07:36:20.825844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.465 [2024-07-25 07:36:20.829363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.726 [2024-07-25 07:36:20.838464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.726 [2024-07-25 07:36:20.839265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.726 [2024-07-25 07:36:20.839303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.726 [2024-07-25 07:36:20.839315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.726 [2024-07-25 07:36:20.839556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.726 [2024-07-25 07:36:20.839777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.726 [2024-07-25 07:36:20.839788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.726 [2024-07-25 07:36:20.839796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.726 [2024-07-25 07:36:20.843316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.726 [2024-07-25 07:36:20.852400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.726 [2024-07-25 07:36:20.853186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.726 [2024-07-25 07:36:20.853231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.726 [2024-07-25 07:36:20.853242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.726 [2024-07-25 07:36:20.853480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.726 [2024-07-25 07:36:20.853701] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.726 [2024-07-25 07:36:20.853710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.726 [2024-07-25 07:36:20.853718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.726 [2024-07-25 07:36:20.857228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.726 [2024-07-25 07:36:20.866314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.726 [2024-07-25 07:36:20.867108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.726 [2024-07-25 07:36:20.867146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.726 [2024-07-25 07:36:20.867158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.726 [2024-07-25 07:36:20.867407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.726 [2024-07-25 07:36:20.867635] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.726 [2024-07-25 07:36:20.867644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.726 [2024-07-25 07:36:20.867652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.726 [2024-07-25 07:36:20.871156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.726 [2024-07-25 07:36:20.880284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.726 [2024-07-25 07:36:20.881054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.726 [2024-07-25 07:36:20.881092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.726 [2024-07-25 07:36:20.881102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.726 [2024-07-25 07:36:20.881358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.726 [2024-07-25 07:36:20.881580] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.726 [2024-07-25 07:36:20.881590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.726 [2024-07-25 07:36:20.881597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.726 [2024-07-25 07:36:20.885103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.726 [2024-07-25 07:36:20.894203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.726 [2024-07-25 07:36:20.894948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.726 [2024-07-25 07:36:20.894986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.726 [2024-07-25 07:36:20.894996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.726 [2024-07-25 07:36:20.895244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.726 [2024-07-25 07:36:20.895465] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.726 [2024-07-25 07:36:20.895474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.726 [2024-07-25 07:36:20.895482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.726 [2024-07-25 07:36:20.898987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.726 [2024-07-25 07:36:20.908072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.726 [2024-07-25 07:36:20.908845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.726 [2024-07-25 07:36:20.908883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.726 [2024-07-25 07:36:20.908894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.726 [2024-07-25 07:36:20.909130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.726 [2024-07-25 07:36:20.909362] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.726 [2024-07-25 07:36:20.909373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.726 [2024-07-25 07:36:20.909380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.726 [2024-07-25 07:36:20.912886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.726 [2024-07-25 07:36:20.921979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.726 [2024-07-25 07:36:20.922764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.726 [2024-07-25 07:36:20.922802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.726 [2024-07-25 07:36:20.922813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.726 [2024-07-25 07:36:20.923050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.726 [2024-07-25 07:36:20.923280] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.726 [2024-07-25 07:36:20.923290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.726 [2024-07-25 07:36:20.923298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.726 [2024-07-25 07:36:20.926803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.726 [2024-07-25 07:36:20.935888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.726 [2024-07-25 07:36:20.936596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.726 [2024-07-25 07:36:20.936616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.726 [2024-07-25 07:36:20.936623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.726 [2024-07-25 07:36:20.936842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.726 [2024-07-25 07:36:20.937059] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.726 [2024-07-25 07:36:20.937068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.726 [2024-07-25 07:36:20.937075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.726 [2024-07-25 07:36:20.940581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.726 [2024-07-25 07:36:20.949706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.726 [2024-07-25 07:36:20.950367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.726 [2024-07-25 07:36:20.950384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.726 [2024-07-25 07:36:20.950392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.726 [2024-07-25 07:36:20.950609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.726 [2024-07-25 07:36:20.950826] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.726 [2024-07-25 07:36:20.950835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.726 [2024-07-25 07:36:20.950841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.726 [2024-07-25 07:36:20.954345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.726 [2024-07-25 07:36:20.963629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.726 [2024-07-25 07:36:20.964412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.726 [2024-07-25 07:36:20.964449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.726 [2024-07-25 07:36:20.964464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.726 [2024-07-25 07:36:20.964702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.726 [2024-07-25 07:36:20.964922] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.726 [2024-07-25 07:36:20.964932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.726 [2024-07-25 07:36:20.964940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.726 [2024-07-25 07:36:20.968453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.726 [2024-07-25 07:36:20.977539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.726 [2024-07-25 07:36:20.978365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.727 [2024-07-25 07:36:20.978403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.727 [2024-07-25 07:36:20.978414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.727 [2024-07-25 07:36:20.978652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.727 [2024-07-25 07:36:20.978873] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.727 [2024-07-25 07:36:20.978883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.727 [2024-07-25 07:36:20.978890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.727 [2024-07-25 07:36:20.982417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.727 [2024-07-25 07:36:20.991337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.727 [2024-07-25 07:36:20.992124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.727 [2024-07-25 07:36:20.992162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.727 [2024-07-25 07:36:20.992173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.727 [2024-07-25 07:36:20.992420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.727 [2024-07-25 07:36:20.992642] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.727 [2024-07-25 07:36:20.992651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.727 [2024-07-25 07:36:20.992659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.727 [2024-07-25 07:36:20.996162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.727 [2024-07-25 07:36:21.005250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.727 [2024-07-25 07:36:21.006051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.727 [2024-07-25 07:36:21.006089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.727 [2024-07-25 07:36:21.006100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.727 [2024-07-25 07:36:21.006346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.727 [2024-07-25 07:36:21.006568] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.727 [2024-07-25 07:36:21.006582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.727 [2024-07-25 07:36:21.006590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.727 [2024-07-25 07:36:21.010095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.727 [2024-07-25 07:36:21.019194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.727 [2024-07-25 07:36:21.019978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.727 [2024-07-25 07:36:21.020016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.727 [2024-07-25 07:36:21.020027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.727 [2024-07-25 07:36:21.020274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.727 [2024-07-25 07:36:21.020496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.727 [2024-07-25 07:36:21.020506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.727 [2024-07-25 07:36:21.020514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.727 [2024-07-25 07:36:21.024021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.727 [2024-07-25 07:36:21.033109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.727 [2024-07-25 07:36:21.033912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.727 [2024-07-25 07:36:21.033949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.727 [2024-07-25 07:36:21.033960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.727 [2024-07-25 07:36:21.034197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.727 [2024-07-25 07:36:21.034429] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.727 [2024-07-25 07:36:21.034439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.727 [2024-07-25 07:36:21.034446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.727 [2024-07-25 07:36:21.037950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.727 [2024-07-25 07:36:21.047037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.727 [2024-07-25 07:36:21.047802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.727 [2024-07-25 07:36:21.047840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.727 [2024-07-25 07:36:21.047851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.727 [2024-07-25 07:36:21.048088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.727 [2024-07-25 07:36:21.048320] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.727 [2024-07-25 07:36:21.048330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.727 [2024-07-25 07:36:21.048338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.727 [2024-07-25 07:36:21.051847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.727 [2024-07-25 07:36:21.060950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.727 [2024-07-25 07:36:21.061771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.727 [2024-07-25 07:36:21.061810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.727 [2024-07-25 07:36:21.061820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.727 [2024-07-25 07:36:21.062057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.727 [2024-07-25 07:36:21.062286] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.727 [2024-07-25 07:36:21.062296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.727 [2024-07-25 07:36:21.062304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.727 [2024-07-25 07:36:21.065810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.727 [2024-07-25 07:36:21.074900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.727 [2024-07-25 07:36:21.075602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.727 [2024-07-25 07:36:21.075622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.727 [2024-07-25 07:36:21.075629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.727 [2024-07-25 07:36:21.075847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.727 [2024-07-25 07:36:21.076065] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.727 [2024-07-25 07:36:21.076074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.727 [2024-07-25 07:36:21.076082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.727 [2024-07-25 07:36:21.079588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.727 [2024-07-25 07:36:21.088686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.727 [2024-07-25 07:36:21.089389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.727 [2024-07-25 07:36:21.089408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.727 [2024-07-25 07:36:21.089415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.727 [2024-07-25 07:36:21.089632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.727 [2024-07-25 07:36:21.089849] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.727 [2024-07-25 07:36:21.089858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.727 [2024-07-25 07:36:21.089865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.727 [2024-07-25 07:36:21.093374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.988 [2024-07-25 07:36:21.102460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.988 [2024-07-25 07:36:21.103246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.988 [2024-07-25 07:36:21.103284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.988 [2024-07-25 07:36:21.103294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.988 [2024-07-25 07:36:21.103537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.988 [2024-07-25 07:36:21.103758] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.988 [2024-07-25 07:36:21.103767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.988 [2024-07-25 07:36:21.103775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.988 [2024-07-25 07:36:21.107290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.988 [2024-07-25 07:36:21.116374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.988 [2024-07-25 07:36:21.117140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.988 [2024-07-25 07:36:21.117178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.988 [2024-07-25 07:36:21.117188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.988 [2024-07-25 07:36:21.117435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.988 [2024-07-25 07:36:21.117657] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.988 [2024-07-25 07:36:21.117667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.988 [2024-07-25 07:36:21.117675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.988 [2024-07-25 07:36:21.121180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.989 [2024-07-25 07:36:21.130268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.989 [2024-07-25 07:36:21.131045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.989 [2024-07-25 07:36:21.131082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.989 [2024-07-25 07:36:21.131093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.989 [2024-07-25 07:36:21.131341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.989 [2024-07-25 07:36:21.131562] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.989 [2024-07-25 07:36:21.131573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.989 [2024-07-25 07:36:21.131581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.989 [2024-07-25 07:36:21.135084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.989 [2024-07-25 07:36:21.144172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.989 [2024-07-25 07:36:21.144967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.989 [2024-07-25 07:36:21.145005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.989 [2024-07-25 07:36:21.145016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.989 [2024-07-25 07:36:21.145263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.989 [2024-07-25 07:36:21.145484] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.989 [2024-07-25 07:36:21.145493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.989 [2024-07-25 07:36:21.145509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.989 [2024-07-25 07:36:21.149016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.989 [2024-07-25 07:36:21.158112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.989 [2024-07-25 07:36:21.158877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.989 [2024-07-25 07:36:21.158915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.989 [2024-07-25 07:36:21.158927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.989 [2024-07-25 07:36:21.159166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.989 [2024-07-25 07:36:21.159398] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.989 [2024-07-25 07:36:21.159408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.989 [2024-07-25 07:36:21.159416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.989 [2024-07-25 07:36:21.162921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.989 [2024-07-25 07:36:21.172005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.989 [2024-07-25 07:36:21.172763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.989 [2024-07-25 07:36:21.172801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.989 [2024-07-25 07:36:21.172811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.989 [2024-07-25 07:36:21.173049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.989 [2024-07-25 07:36:21.173281] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.989 [2024-07-25 07:36:21.173291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.989 [2024-07-25 07:36:21.173299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.989 [2024-07-25 07:36:21.176807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.989 [2024-07-25 07:36:21.185911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.989 [2024-07-25 07:36:21.186668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.989 [2024-07-25 07:36:21.186707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.989 [2024-07-25 07:36:21.186717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.989 [2024-07-25 07:36:21.186954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.989 [2024-07-25 07:36:21.187184] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.989 [2024-07-25 07:36:21.187195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.989 [2024-07-25 07:36:21.187213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.989 [2024-07-25 07:36:21.190718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.989 [2024-07-25 07:36:21.199833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.989 [2024-07-25 07:36:21.200506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.989 [2024-07-25 07:36:21.200548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.989 [2024-07-25 07:36:21.200559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.989 [2024-07-25 07:36:21.200796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.989 [2024-07-25 07:36:21.201017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.989 [2024-07-25 07:36:21.201027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.989 [2024-07-25 07:36:21.201035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.989 [2024-07-25 07:36:21.204550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.989 [2024-07-25 07:36:21.213636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.989 [2024-07-25 07:36:21.214340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.989 [2024-07-25 07:36:21.214360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.989 [2024-07-25 07:36:21.214368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.989 [2024-07-25 07:36:21.214585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.989 [2024-07-25 07:36:21.214803] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.989 [2024-07-25 07:36:21.214812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.989 [2024-07-25 07:36:21.214819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.989 [2024-07-25 07:36:21.218323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.989 [2024-07-25 07:36:21.227424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.989 [2024-07-25 07:36:21.228218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.989 [2024-07-25 07:36:21.228255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.989 [2024-07-25 07:36:21.228267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.989 [2024-07-25 07:36:21.228506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.989 [2024-07-25 07:36:21.228727] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.989 [2024-07-25 07:36:21.228736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.989 [2024-07-25 07:36:21.228744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.989 [2024-07-25 07:36:21.232261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.989 [2024-07-25 07:36:21.241365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.989 [2024-07-25 07:36:21.242130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.989 [2024-07-25 07:36:21.242168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.989 [2024-07-25 07:36:21.242179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.989 [2024-07-25 07:36:21.242425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.989 [2024-07-25 07:36:21.242651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.989 [2024-07-25 07:36:21.242661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.989 [2024-07-25 07:36:21.242668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.989 [2024-07-25 07:36:21.246175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.989 [2024-07-25 07:36:21.255284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.989 [2024-07-25 07:36:21.255949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.989 [2024-07-25 07:36:21.255969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.989 [2024-07-25 07:36:21.255977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.989 [2024-07-25 07:36:21.256195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.989 [2024-07-25 07:36:21.256419] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.989 [2024-07-25 07:36:21.256430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.989 [2024-07-25 07:36:21.256437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.989 [2024-07-25 07:36:21.259940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.989 [2024-07-25 07:36:21.269037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.989 [2024-07-25 07:36:21.269838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.990 [2024-07-25 07:36:21.269877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.990 [2024-07-25 07:36:21.269887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.990 [2024-07-25 07:36:21.270124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.990 [2024-07-25 07:36:21.270355] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.990 [2024-07-25 07:36:21.270366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.990 [2024-07-25 07:36:21.270374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.990 [2024-07-25 07:36:21.273886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.990 [2024-07-25 07:36:21.282999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.990 [2024-07-25 07:36:21.283714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.990 [2024-07-25 07:36:21.283733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.990 [2024-07-25 07:36:21.283741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.990 [2024-07-25 07:36:21.283959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.990 [2024-07-25 07:36:21.284176] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.990 [2024-07-25 07:36:21.284185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.990 [2024-07-25 07:36:21.284192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.990 [2024-07-25 07:36:21.287711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.990 [2024-07-25 07:36:21.296818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.990 [2024-07-25 07:36:21.297484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.990 [2024-07-25 07:36:21.297523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.990 [2024-07-25 07:36:21.297534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.990 [2024-07-25 07:36:21.297772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.990 [2024-07-25 07:36:21.297992] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.990 [2024-07-25 07:36:21.298002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.990 [2024-07-25 07:36:21.298009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.990 [2024-07-25 07:36:21.301531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.990 [2024-07-25 07:36:21.310643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.990 [2024-07-25 07:36:21.311428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.990 [2024-07-25 07:36:21.311466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.990 [2024-07-25 07:36:21.311477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.990 [2024-07-25 07:36:21.311714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.990 [2024-07-25 07:36:21.311935] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.990 [2024-07-25 07:36:21.311944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.990 [2024-07-25 07:36:21.311952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.990 [2024-07-25 07:36:21.315472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.990 [2024-07-25 07:36:21.324664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.990 [2024-07-25 07:36:21.325477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.990 [2024-07-25 07:36:21.325515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.990 [2024-07-25 07:36:21.325526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.990 [2024-07-25 07:36:21.325763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.990 [2024-07-25 07:36:21.325984] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.990 [2024-07-25 07:36:21.325994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.990 [2024-07-25 07:36:21.326002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.990 [2024-07-25 07:36:21.329510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.990 [2024-07-25 07:36:21.338598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.990 [2024-07-25 07:36:21.339311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.990 [2024-07-25 07:36:21.339349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.990 [2024-07-25 07:36:21.339366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.990 [2024-07-25 07:36:21.339607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.990 [2024-07-25 07:36:21.339829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.990 [2024-07-25 07:36:21.339839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.990 [2024-07-25 07:36:21.339846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.990 [2024-07-25 07:36:21.343360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.990 [2024-07-25 07:36:21.352448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.990 [2024-07-25 07:36:21.353116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.990 [2024-07-25 07:36:21.353135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:13.990 [2024-07-25 07:36:21.353143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:13.990 [2024-07-25 07:36:21.353366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:13.990 [2024-07-25 07:36:21.353584] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.990 [2024-07-25 07:36:21.353594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.990 [2024-07-25 07:36:21.353601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.253 [2024-07-25 07:36:21.357108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.253 [2024-07-25 07:36:21.366216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.253 [2024-07-25 07:36:21.366962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.253 [2024-07-25 07:36:21.367000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.253 [2024-07-25 07:36:21.367011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.253 [2024-07-25 07:36:21.367257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.253 [2024-07-25 07:36:21.367479] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.253 [2024-07-25 07:36:21.367489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.253 [2024-07-25 07:36:21.367497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.253 [2024-07-25 07:36:21.371002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.253 [2024-07-25 07:36:21.380088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.253 [2024-07-25 07:36:21.380805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.253 [2024-07-25 07:36:21.380825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.253 [2024-07-25 07:36:21.380833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.253 [2024-07-25 07:36:21.381050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.253 [2024-07-25 07:36:21.381273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.253 [2024-07-25 07:36:21.381287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.253 [2024-07-25 07:36:21.381295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.253 [2024-07-25 07:36:21.384808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.253 [2024-07-25 07:36:21.393899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.253 [2024-07-25 07:36:21.394612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.253 [2024-07-25 07:36:21.394651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.253 [2024-07-25 07:36:21.394662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.253 [2024-07-25 07:36:21.394900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.253 [2024-07-25 07:36:21.395121] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.253 [2024-07-25 07:36:21.395131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.253 [2024-07-25 07:36:21.395138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.253 [2024-07-25 07:36:21.398655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.253 [2024-07-25 07:36:21.407796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.253 [2024-07-25 07:36:21.408582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.253 [2024-07-25 07:36:21.408621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.253 [2024-07-25 07:36:21.408632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.253 [2024-07-25 07:36:21.408869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.253 [2024-07-25 07:36:21.409091] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.253 [2024-07-25 07:36:21.409101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.253 [2024-07-25 07:36:21.409108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.253 [2024-07-25 07:36:21.412628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.253 [2024-07-25 07:36:21.421738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.253 [2024-07-25 07:36:21.422294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.253 [2024-07-25 07:36:21.422333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.253 [2024-07-25 07:36:21.422345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.253 [2024-07-25 07:36:21.422584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.253 [2024-07-25 07:36:21.422805] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.253 [2024-07-25 07:36:21.422814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.253 [2024-07-25 07:36:21.422822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.253 [2024-07-25 07:36:21.426335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.253 [2024-07-25 07:36:21.435632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.253 [2024-07-25 07:36:21.436465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.253 [2024-07-25 07:36:21.436504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.253 [2024-07-25 07:36:21.436516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.253 [2024-07-25 07:36:21.436754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.253 [2024-07-25 07:36:21.436975] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.253 [2024-07-25 07:36:21.436985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.253 [2024-07-25 07:36:21.436992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.253 [2024-07-25 07:36:21.440507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.253 [2024-07-25 07:36:21.449391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.253 [2024-07-25 07:36:21.450090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.253 [2024-07-25 07:36:21.450109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.253 [2024-07-25 07:36:21.450117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.253 [2024-07-25 07:36:21.450341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.253 [2024-07-25 07:36:21.450560] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.253 [2024-07-25 07:36:21.450569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.253 [2024-07-25 07:36:21.450576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.253 [2024-07-25 07:36:21.454082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.253 [2024-07-25 07:36:21.463176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.253 [2024-07-25 07:36:21.463835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.253 [2024-07-25 07:36:21.463852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.253 [2024-07-25 07:36:21.463859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.253 [2024-07-25 07:36:21.464076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.253 [2024-07-25 07:36:21.464299] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.253 [2024-07-25 07:36:21.464310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.253 [2024-07-25 07:36:21.464317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.253 [2024-07-25 07:36:21.467821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.253 [2024-07-25 07:36:21.476918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.253 [2024-07-25 07:36:21.477667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.253 [2024-07-25 07:36:21.477705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.253 [2024-07-25 07:36:21.477716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.253 [2024-07-25 07:36:21.477958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.253 [2024-07-25 07:36:21.478179] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.253 [2024-07-25 07:36:21.478189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.253 [2024-07-25 07:36:21.478197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.253 [2024-07-25 07:36:21.481712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.253 [2024-07-25 07:36:21.490818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.253 [2024-07-25 07:36:21.491579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.253 [2024-07-25 07:36:21.491618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.253 [2024-07-25 07:36:21.491628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.253 [2024-07-25 07:36:21.491865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.253 [2024-07-25 07:36:21.492086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.253 [2024-07-25 07:36:21.492097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.253 [2024-07-25 07:36:21.492104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.254 [2024-07-25 07:36:21.495618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.254 [2024-07-25 07:36:21.504716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.254 [2024-07-25 07:36:21.505502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.254 [2024-07-25 07:36:21.505541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.254 [2024-07-25 07:36:21.505552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.254 [2024-07-25 07:36:21.505789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.254 [2024-07-25 07:36:21.506010] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.254 [2024-07-25 07:36:21.506019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.254 [2024-07-25 07:36:21.506027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.254 [2024-07-25 07:36:21.509540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.254 [2024-07-25 07:36:21.518633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.254 [2024-07-25 07:36:21.519406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.254 [2024-07-25 07:36:21.519444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.254 [2024-07-25 07:36:21.519455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.254 [2024-07-25 07:36:21.519693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.254 [2024-07-25 07:36:21.519914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.254 [2024-07-25 07:36:21.519923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.254 [2024-07-25 07:36:21.519936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.254 [2024-07-25 07:36:21.523446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.254 [2024-07-25 07:36:21.532538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.254 [2024-07-25 07:36:21.533407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.254 [2024-07-25 07:36:21.533445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.254 [2024-07-25 07:36:21.533456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.254 [2024-07-25 07:36:21.533693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.254 [2024-07-25 07:36:21.533914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.254 [2024-07-25 07:36:21.533924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.254 [2024-07-25 07:36:21.533931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.254 [2024-07-25 07:36:21.537454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.254 [2024-07-25 07:36:21.546350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.254 [2024-07-25 07:36:21.547025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.254 [2024-07-25 07:36:21.547045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.254 [2024-07-25 07:36:21.547054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.254 [2024-07-25 07:36:21.547278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.254 [2024-07-25 07:36:21.547496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.254 [2024-07-25 07:36:21.547505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.254 [2024-07-25 07:36:21.547512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.254 [2024-07-25 07:36:21.551018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.254 [2024-07-25 07:36:21.560118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.254 [2024-07-25 07:36:21.560877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.254 [2024-07-25 07:36:21.560916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.254 [2024-07-25 07:36:21.560926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.254 [2024-07-25 07:36:21.561163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.254 [2024-07-25 07:36:21.561395] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.254 [2024-07-25 07:36:21.561406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.254 [2024-07-25 07:36:21.561414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.254 [2024-07-25 07:36:21.564922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.254 [2024-07-25 07:36:21.574025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.254 [2024-07-25 07:36:21.574835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.254 [2024-07-25 07:36:21.574878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.254 [2024-07-25 07:36:21.574889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.254 [2024-07-25 07:36:21.575126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.254 [2024-07-25 07:36:21.575358] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.254 [2024-07-25 07:36:21.575368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.254 [2024-07-25 07:36:21.575376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.254 [2024-07-25 07:36:21.578887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.254 [2024-07-25 07:36:21.587886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.254 [2024-07-25 07:36:21.588599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.254 [2024-07-25 07:36:21.588619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.254 [2024-07-25 07:36:21.588627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.254 [2024-07-25 07:36:21.588845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.254 [2024-07-25 07:36:21.589070] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.254 [2024-07-25 07:36:21.589080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.254 [2024-07-25 07:36:21.589087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.254 [2024-07-25 07:36:21.592598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.254 [2024-07-25 07:36:21.601700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.254 [2024-07-25 07:36:21.602481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.254 [2024-07-25 07:36:21.602520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.254 [2024-07-25 07:36:21.602530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.254 [2024-07-25 07:36:21.602767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.254 [2024-07-25 07:36:21.602988] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.254 [2024-07-25 07:36:21.602999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.254 [2024-07-25 07:36:21.603006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.254 [2024-07-25 07:36:21.606519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.254 [2024-07-25 07:36:21.615638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.254 [2024-07-25 07:36:21.616436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.254 [2024-07-25 07:36:21.616474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.254 [2024-07-25 07:36:21.616485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.254 [2024-07-25 07:36:21.616722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.254 [2024-07-25 07:36:21.616947] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.254 [2024-07-25 07:36:21.616957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.254 [2024-07-25 07:36:21.616965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.517 [2024-07-25 07:36:21.620484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.517 [2024-07-25 07:36:21.629587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.517 [2024-07-25 07:36:21.630379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-25 07:36:21.630418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.517 [2024-07-25 07:36:21.630429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.517 [2024-07-25 07:36:21.630667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.517 [2024-07-25 07:36:21.630889] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.517 [2024-07-25 07:36:21.630899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.517 [2024-07-25 07:36:21.630907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.517 [2024-07-25 07:36:21.634422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.517 [2024-07-25 07:36:21.643523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.517 [2024-07-25 07:36:21.644250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-25 07:36:21.644276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.517 [2024-07-25 07:36:21.644284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.517 [2024-07-25 07:36:21.644507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.517 [2024-07-25 07:36:21.644725] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.517 [2024-07-25 07:36:21.644735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.517 [2024-07-25 07:36:21.644742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.517 [2024-07-25 07:36:21.648298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.517 [2024-07-25 07:36:21.657402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.517 [2024-07-25 07:36:21.658094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-25 07:36:21.658112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.517 [2024-07-25 07:36:21.658120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.517 [2024-07-25 07:36:21.658342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.517 [2024-07-25 07:36:21.658560] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.517 [2024-07-25 07:36:21.658569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.517 [2024-07-25 07:36:21.658576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.517 [2024-07-25 07:36:21.662080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.517 [2024-07-25 07:36:21.671189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.517 [2024-07-25 07:36:21.671993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-25 07:36:21.672031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.517 [2024-07-25 07:36:21.672042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.517 [2024-07-25 07:36:21.672289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.517 [2024-07-25 07:36:21.672511] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.517 [2024-07-25 07:36:21.672521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.517 [2024-07-25 07:36:21.672529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.517 [2024-07-25 07:36:21.676039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.517 [2024-07-25 07:36:21.684951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.517 [2024-07-25 07:36:21.685446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-25 07:36:21.685470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.517 [2024-07-25 07:36:21.685478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.517 [2024-07-25 07:36:21.685699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.517 [2024-07-25 07:36:21.685917] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.517 [2024-07-25 07:36:21.685927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.517 [2024-07-25 07:36:21.685934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.517 [2024-07-25 07:36:21.689467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.517 [2024-07-25 07:36:21.698777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.517 [2024-07-25 07:36:21.699525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-25 07:36:21.699563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.517 [2024-07-25 07:36:21.699574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.517 [2024-07-25 07:36:21.699811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.517 [2024-07-25 07:36:21.700031] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.517 [2024-07-25 07:36:21.700041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.517 [2024-07-25 07:36:21.700049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.517 [2024-07-25 07:36:21.703573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.517 [2024-07-25 07:36:21.712720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.517 [2024-07-25 07:36:21.713517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-25 07:36:21.713556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.517 [2024-07-25 07:36:21.713571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.517 [2024-07-25 07:36:21.713808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.517 [2024-07-25 07:36:21.714029] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.518 [2024-07-25 07:36:21.714039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.518 [2024-07-25 07:36:21.714046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.518 [2024-07-25 07:36:21.717569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.518 [2024-07-25 07:36:21.726677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.518 [2024-07-25 07:36:21.727454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-25 07:36:21.727474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.518 [2024-07-25 07:36:21.727482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.518 [2024-07-25 07:36:21.727699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.518 [2024-07-25 07:36:21.727917] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.518 [2024-07-25 07:36:21.727926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.518 [2024-07-25 07:36:21.727935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.518 [2024-07-25 07:36:21.731445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.518 [2024-07-25 07:36:21.740547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.518 [2024-07-25 07:36:21.741244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-25 07:36:21.741267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.518 [2024-07-25 07:36:21.741276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.518 [2024-07-25 07:36:21.741497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.518 [2024-07-25 07:36:21.741715] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.518 [2024-07-25 07:36:21.741725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.518 [2024-07-25 07:36:21.741732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.518 [2024-07-25 07:36:21.745245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.518 [2024-07-25 07:36:21.754342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.518 [2024-07-25 07:36:21.755029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-25 07:36:21.755046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.518 [2024-07-25 07:36:21.755054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.518 [2024-07-25 07:36:21.755277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.518 [2024-07-25 07:36:21.755495] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.518 [2024-07-25 07:36:21.755508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.518 [2024-07-25 07:36:21.755515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.518 [2024-07-25 07:36:21.759022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.518 [2024-07-25 07:36:21.768118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.518 [2024-07-25 07:36:21.768797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-25 07:36:21.768835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.518 [2024-07-25 07:36:21.768847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.518 [2024-07-25 07:36:21.769083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.518 [2024-07-25 07:36:21.769312] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.518 [2024-07-25 07:36:21.769323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.518 [2024-07-25 07:36:21.769330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.518 [2024-07-25 07:36:21.772836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.518 [2024-07-25 07:36:21.781983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.518 [2024-07-25 07:36:21.782729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-25 07:36:21.782767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.518 [2024-07-25 07:36:21.782778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.518 [2024-07-25 07:36:21.783015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.518 [2024-07-25 07:36:21.783244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.518 [2024-07-25 07:36:21.783254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.518 [2024-07-25 07:36:21.783262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.518 [2024-07-25 07:36:21.786769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.518 [2024-07-25 07:36:21.795868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.518 [2024-07-25 07:36:21.796618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-25 07:36:21.796657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.518 [2024-07-25 07:36:21.796667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.518 [2024-07-25 07:36:21.796904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.518 [2024-07-25 07:36:21.797125] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.518 [2024-07-25 07:36:21.797135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.518 [2024-07-25 07:36:21.797143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.518 [2024-07-25 07:36:21.800657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.518 [2024-07-25 07:36:21.809749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.518 [2024-07-25 07:36:21.810465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-25 07:36:21.810484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.518 [2024-07-25 07:36:21.810492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.518 [2024-07-25 07:36:21.810710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.518 [2024-07-25 07:36:21.810927] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.518 [2024-07-25 07:36:21.810937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.518 [2024-07-25 07:36:21.810944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.518 [2024-07-25 07:36:21.814449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.518 [2024-07-25 07:36:21.823565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.518 [2024-07-25 07:36:21.824415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-25 07:36:21.824452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.518 [2024-07-25 07:36:21.824463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.518 [2024-07-25 07:36:21.824700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.518 [2024-07-25 07:36:21.824920] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.518 [2024-07-25 07:36:21.824930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.518 [2024-07-25 07:36:21.824938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.518 [2024-07-25 07:36:21.828452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.518 [2024-07-25 07:36:21.837336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.518 [2024-07-25 07:36:21.838041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-25 07:36:21.838061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.518 [2024-07-25 07:36:21.838069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.518 [2024-07-25 07:36:21.838292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.518 [2024-07-25 07:36:21.838510] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.518 [2024-07-25 07:36:21.838519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.518 [2024-07-25 07:36:21.838526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.518 [2024-07-25 07:36:21.842161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.518 [2024-07-25 07:36:21.851259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.518 [2024-07-25 07:36:21.852049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-25 07:36:21.852087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.518 [2024-07-25 07:36:21.852098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.518 [2024-07-25 07:36:21.852350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.518 [2024-07-25 07:36:21.852573] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.518 [2024-07-25 07:36:21.852583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.519 [2024-07-25 07:36:21.852590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.519 [2024-07-25 07:36:21.856094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.519 [2024-07-25 07:36:21.865188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.519 [2024-07-25 07:36:21.865931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.519 [2024-07-25 07:36:21.865969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.519 [2024-07-25 07:36:21.865980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.519 [2024-07-25 07:36:21.866225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.519 [2024-07-25 07:36:21.866447] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.519 [2024-07-25 07:36:21.866457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.519 [2024-07-25 07:36:21.866464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.519 [2024-07-25 07:36:21.869970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.519 [2024-07-25 07:36:21.879062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.519 [2024-07-25 07:36:21.879841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.519 [2024-07-25 07:36:21.879879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.519 [2024-07-25 07:36:21.879890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.519 [2024-07-25 07:36:21.880127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.519 [2024-07-25 07:36:21.880359] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.519 [2024-07-25 07:36:21.880369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.519 [2024-07-25 07:36:21.880377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.781 [2024-07-25 07:36:21.883896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.781 [2024-07-25 07:36:21.892999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.781 [2024-07-25 07:36:21.893687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.781 [2024-07-25 07:36:21.893707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.781 [2024-07-25 07:36:21.893715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.781 [2024-07-25 07:36:21.893932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.781 [2024-07-25 07:36:21.894150] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.781 [2024-07-25 07:36:21.894159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.781 [2024-07-25 07:36:21.894171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.781 [2024-07-25 07:36:21.897676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.781 [2024-07-25 07:36:21.906762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.781 [2024-07-25 07:36:21.907552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.781 [2024-07-25 07:36:21.907591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.781 [2024-07-25 07:36:21.907601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.781 [2024-07-25 07:36:21.907839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.781 [2024-07-25 07:36:21.908060] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.781 [2024-07-25 07:36:21.908070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.781 [2024-07-25 07:36:21.908078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.781 [2024-07-25 07:36:21.911592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.781 [2024-07-25 07:36:21.920684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.781 [2024-07-25 07:36:21.921530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.781 [2024-07-25 07:36:21.921568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.781 [2024-07-25 07:36:21.921580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.781 [2024-07-25 07:36:21.921817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.781 [2024-07-25 07:36:21.922037] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.781 [2024-07-25 07:36:21.922047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.781 [2024-07-25 07:36:21.922055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.781 [2024-07-25 07:36:21.925566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.781 [2024-07-25 07:36:21.934451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.781 [2024-07-25 07:36:21.935113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.781 [2024-07-25 07:36:21.935132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.781 [2024-07-25 07:36:21.935140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.781 [2024-07-25 07:36:21.935363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.781 [2024-07-25 07:36:21.935580] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.781 [2024-07-25 07:36:21.935590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.781 [2024-07-25 07:36:21.935597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.781 [2024-07-25 07:36:21.939095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.781 [2024-07-25 07:36:21.948389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.781 [2024-07-25 07:36:21.949052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.781 [2024-07-25 07:36:21.949073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.782 [2024-07-25 07:36:21.949081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.782 [2024-07-25 07:36:21.949303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.782 [2024-07-25 07:36:21.949521] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.782 [2024-07-25 07:36:21.949531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.782 [2024-07-25 07:36:21.949538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.782 [2024-07-25 07:36:21.953039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.782 [2024-07-25 07:36:21.962329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.782 [2024-07-25 07:36:21.963110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.782 [2024-07-25 07:36:21.963149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.782 [2024-07-25 07:36:21.963160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.782 [2024-07-25 07:36:21.963409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.782 [2024-07-25 07:36:21.963631] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.782 [2024-07-25 07:36:21.963640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.782 [2024-07-25 07:36:21.963648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.782 [2024-07-25 07:36:21.967153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.782 [2024-07-25 07:36:21.976245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.782 [2024-07-25 07:36:21.977022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.782 [2024-07-25 07:36:21.977060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.782 [2024-07-25 07:36:21.977071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.782 [2024-07-25 07:36:21.977315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.782 [2024-07-25 07:36:21.977537] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.782 [2024-07-25 07:36:21.977547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.782 [2024-07-25 07:36:21.977554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.782 [2024-07-25 07:36:21.981060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.782 [2024-07-25 07:36:21.990162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.782 [2024-07-25 07:36:21.990973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.782 [2024-07-25 07:36:21.991011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.782 [2024-07-25 07:36:21.991022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.782 [2024-07-25 07:36:21.991267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.782 [2024-07-25 07:36:21.991493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.782 [2024-07-25 07:36:21.991504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.782 [2024-07-25 07:36:21.991511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.782 [2024-07-25 07:36:21.995017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.782 [2024-07-25 07:36:22.004103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.782 [2024-07-25 07:36:22.004903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.782 [2024-07-25 07:36:22.004941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.782 [2024-07-25 07:36:22.004952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.782 [2024-07-25 07:36:22.005189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.782 [2024-07-25 07:36:22.005420] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.782 [2024-07-25 07:36:22.005430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.782 [2024-07-25 07:36:22.005437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.782 [2024-07-25 07:36:22.008943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.782 [2024-07-25 07:36:22.018030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.782 [2024-07-25 07:36:22.018783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.782 [2024-07-25 07:36:22.018822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.782 [2024-07-25 07:36:22.018832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.782 [2024-07-25 07:36:22.019069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.782 [2024-07-25 07:36:22.019300] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.782 [2024-07-25 07:36:22.019310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.782 [2024-07-25 07:36:22.019317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.782 [2024-07-25 07:36:22.022822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.782 [2024-07-25 07:36:22.031942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.782 [2024-07-25 07:36:22.032696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.782 [2024-07-25 07:36:22.032734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.782 [2024-07-25 07:36:22.032744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.782 [2024-07-25 07:36:22.032981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.782 [2024-07-25 07:36:22.033211] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.782 [2024-07-25 07:36:22.033221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.782 [2024-07-25 07:36:22.033229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.782 [2024-07-25 07:36:22.036742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.782 [2024-07-25 07:36:22.045826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.782 [2024-07-25 07:36:22.046492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.782 [2024-07-25 07:36:22.046512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.782 [2024-07-25 07:36:22.046520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.782 [2024-07-25 07:36:22.046738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.782 [2024-07-25 07:36:22.046955] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.782 [2024-07-25 07:36:22.046964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.782 [2024-07-25 07:36:22.046971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.782 [2024-07-25 07:36:22.050476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.782 [2024-07-25 07:36:22.059971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.782 [2024-07-25 07:36:22.060648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.782 [2024-07-25 07:36:22.060665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.782 [2024-07-25 07:36:22.060673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.782 [2024-07-25 07:36:22.060890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.782 [2024-07-25 07:36:22.061108] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.782 [2024-07-25 07:36:22.061116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.782 [2024-07-25 07:36:22.061124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.782 [2024-07-25 07:36:22.064628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.782 [2024-07-25 07:36:22.073917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.782 [2024-07-25 07:36:22.074578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.782 [2024-07-25 07:36:22.074595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.782 [2024-07-25 07:36:22.074603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.782 [2024-07-25 07:36:22.074819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.782 [2024-07-25 07:36:22.075036] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.782 [2024-07-25 07:36:22.075045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.782 [2024-07-25 07:36:22.075052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.782 [2024-07-25 07:36:22.078552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.782 [2024-07-25 07:36:22.087846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.782 [2024-07-25 07:36:22.088604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.782 [2024-07-25 07:36:22.088643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.783 [2024-07-25 07:36:22.088658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.783 [2024-07-25 07:36:22.088895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.783 [2024-07-25 07:36:22.089116] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.783 [2024-07-25 07:36:22.089126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.783 [2024-07-25 07:36:22.089134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.783 [2024-07-25 07:36:22.092659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.783 [2024-07-25 07:36:22.101747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.783 [2024-07-25 07:36:22.102512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.783 [2024-07-25 07:36:22.102550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.783 [2024-07-25 07:36:22.102561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.783 [2024-07-25 07:36:22.102797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.783 [2024-07-25 07:36:22.103018] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.783 [2024-07-25 07:36:22.103027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.783 [2024-07-25 07:36:22.103035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.783 [2024-07-25 07:36:22.106548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.783 [2024-07-25 07:36:22.115637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.783 [2024-07-25 07:36:22.116464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.783 [2024-07-25 07:36:22.116502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.783 [2024-07-25 07:36:22.116513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.783 [2024-07-25 07:36:22.116750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.783 [2024-07-25 07:36:22.116970] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.783 [2024-07-25 07:36:22.116980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.783 [2024-07-25 07:36:22.116988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.783 [2024-07-25 07:36:22.120503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.783 [2024-07-25 07:36:22.129383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.783 [2024-07-25 07:36:22.130142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.783 [2024-07-25 07:36:22.130180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.783 [2024-07-25 07:36:22.130190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.783 [2024-07-25 07:36:22.130435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.783 [2024-07-25 07:36:22.130657] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.783 [2024-07-25 07:36:22.130671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.783 [2024-07-25 07:36:22.130678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.783 [2024-07-25 07:36:22.134184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.783 [2024-07-25 07:36:22.143275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.783 [2024-07-25 07:36:22.144036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.783 [2024-07-25 07:36:22.144074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:14.783 [2024-07-25 07:36:22.144084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:14.783 [2024-07-25 07:36:22.144331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:14.783 [2024-07-25 07:36:22.144552] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.783 [2024-07-25 07:36:22.144562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.783 [2024-07-25 07:36:22.144569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.045 [2024-07-25 07:36:22.148075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.045 [2024-07-25 07:36:22.157169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.045 [2024-07-25 07:36:22.157954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-25 07:36:22.157993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.045 [2024-07-25 07:36:22.158003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.045 [2024-07-25 07:36:22.158250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.045 [2024-07-25 07:36:22.158472] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.045 [2024-07-25 07:36:22.158482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.045 [2024-07-25 07:36:22.158490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.045 [2024-07-25 07:36:22.161997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.045 [2024-07-25 07:36:22.171082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.045 [2024-07-25 07:36:22.171884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-25 07:36:22.171922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.045 [2024-07-25 07:36:22.171932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.045 [2024-07-25 07:36:22.172169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.045 [2024-07-25 07:36:22.172400] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.045 [2024-07-25 07:36:22.172411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.045 [2024-07-25 07:36:22.172419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.045 [2024-07-25 07:36:22.175925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.045 [2024-07-25 07:36:22.185021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.045 [2024-07-25 07:36:22.185786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-25 07:36:22.185825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.045 [2024-07-25 07:36:22.185835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.045 [2024-07-25 07:36:22.186072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.045 [2024-07-25 07:36:22.186302] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.045 [2024-07-25 07:36:22.186313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.045 [2024-07-25 07:36:22.186321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.045 [2024-07-25 07:36:22.189825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.045 [2024-07-25 07:36:22.198927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.045 [2024-07-25 07:36:22.199680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-25 07:36:22.199718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.045 [2024-07-25 07:36:22.199729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.045 [2024-07-25 07:36:22.199966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.045 [2024-07-25 07:36:22.200187] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.045 [2024-07-25 07:36:22.200197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.045 [2024-07-25 07:36:22.200214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.045 [2024-07-25 07:36:22.203720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.045 [2024-07-25 07:36:22.212806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.045 [2024-07-25 07:36:22.213596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-25 07:36:22.213634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.045 [2024-07-25 07:36:22.213645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.045 [2024-07-25 07:36:22.213882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.045 [2024-07-25 07:36:22.214103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.045 [2024-07-25 07:36:22.214113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.045 [2024-07-25 07:36:22.214120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.045 [2024-07-25 07:36:22.217636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.045 [2024-07-25 07:36:22.226734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.045 [2024-07-25 07:36:22.227499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-25 07:36:22.227537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.045 [2024-07-25 07:36:22.227548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.045 [2024-07-25 07:36:22.227790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.045 [2024-07-25 07:36:22.228013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.045 [2024-07-25 07:36:22.228023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.045 [2024-07-25 07:36:22.228031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.045 [2024-07-25 07:36:22.231546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.045 [2024-07-25 07:36:22.240666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.045 [2024-07-25 07:36:22.241539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-25 07:36:22.241577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.045 [2024-07-25 07:36:22.241588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.045 [2024-07-25 07:36:22.241825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.045 [2024-07-25 07:36:22.242046] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.045 [2024-07-25 07:36:22.242056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.045 [2024-07-25 07:36:22.242064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.045 [2024-07-25 07:36:22.245579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.045 [2024-07-25 07:36:22.254462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.045 [2024-07-25 07:36:22.255265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.045 [2024-07-25 07:36:22.255311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.045 [2024-07-25 07:36:22.255323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.045 [2024-07-25 07:36:22.255563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.045 [2024-07-25 07:36:22.255784] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.045 [2024-07-25 07:36:22.255793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.045 [2024-07-25 07:36:22.255801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.045 [2024-07-25 07:36:22.259314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.046 [2024-07-25 07:36:22.268398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.046 [2024-07-25 07:36:22.269184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-25 07:36:22.269229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.046 [2024-07-25 07:36:22.269239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.046 [2024-07-25 07:36:22.269477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.046 [2024-07-25 07:36:22.269698] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.046 [2024-07-25 07:36:22.269707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.046 [2024-07-25 07:36:22.269719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.046 [2024-07-25 07:36:22.273231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.046 [2024-07-25 07:36:22.282328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.046 [2024-07-25 07:36:22.283123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-25 07:36:22.283161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.046 [2024-07-25 07:36:22.283172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.046 [2024-07-25 07:36:22.283417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.046 [2024-07-25 07:36:22.283639] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.046 [2024-07-25 07:36:22.283648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.046 [2024-07-25 07:36:22.283656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.046 [2024-07-25 07:36:22.287160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.046 [2024-07-25 07:36:22.296261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.046 [2024-07-25 07:36:22.297048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-25 07:36:22.297086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.046 [2024-07-25 07:36:22.297096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.046 [2024-07-25 07:36:22.297342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.046 [2024-07-25 07:36:22.297564] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.046 [2024-07-25 07:36:22.297574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.046 [2024-07-25 07:36:22.297582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.046 [2024-07-25 07:36:22.301088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.046 [2024-07-25 07:36:22.310176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.046 [2024-07-25 07:36:22.310922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-25 07:36:22.310961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.046 [2024-07-25 07:36:22.310972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.046 [2024-07-25 07:36:22.311219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.046 [2024-07-25 07:36:22.311441] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.046 [2024-07-25 07:36:22.311451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.046 [2024-07-25 07:36:22.311459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.046 [2024-07-25 07:36:22.314968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.046 [2024-07-25 07:36:22.324064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.046 [2024-07-25 07:36:22.324826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-25 07:36:22.324869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.046 [2024-07-25 07:36:22.324880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.046 [2024-07-25 07:36:22.325118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.046 [2024-07-25 07:36:22.325347] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.046 [2024-07-25 07:36:22.325357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.046 [2024-07-25 07:36:22.325365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.046 [2024-07-25 07:36:22.328869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.046 [2024-07-25 07:36:22.337956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.046 [2024-07-25 07:36:22.338647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-25 07:36:22.338666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.046 [2024-07-25 07:36:22.338675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.046 [2024-07-25 07:36:22.338892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.046 [2024-07-25 07:36:22.339109] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.046 [2024-07-25 07:36:22.339118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.046 [2024-07-25 07:36:22.339125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.046 [2024-07-25 07:36:22.342628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.046 [2024-07-25 07:36:22.351710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.046 [2024-07-25 07:36:22.352302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-25 07:36:22.352341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.046 [2024-07-25 07:36:22.352353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.046 [2024-07-25 07:36:22.352593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.046 [2024-07-25 07:36:22.352814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.046 [2024-07-25 07:36:22.352824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.046 [2024-07-25 07:36:22.352831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.046 [2024-07-25 07:36:22.356345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.046 [2024-07-25 07:36:22.365639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.046 [2024-07-25 07:36:22.366441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-25 07:36:22.366479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.046 [2024-07-25 07:36:22.366489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.046 [2024-07-25 07:36:22.366727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.046 [2024-07-25 07:36:22.366952] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.046 [2024-07-25 07:36:22.366962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.046 [2024-07-25 07:36:22.366970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.046 [2024-07-25 07:36:22.370487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.046 [2024-07-25 07:36:22.379574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.046 [2024-07-25 07:36:22.380276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-25 07:36:22.380314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.046 [2024-07-25 07:36:22.380324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.046 [2024-07-25 07:36:22.380561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.046 [2024-07-25 07:36:22.380782] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.046 [2024-07-25 07:36:22.380791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.046 [2024-07-25 07:36:22.380799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.046 [2024-07-25 07:36:22.384321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.046 [2024-07-25 07:36:22.393413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.046 [2024-07-25 07:36:22.394218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.046 [2024-07-25 07:36:22.394256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.046 [2024-07-25 07:36:22.394268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.046 [2024-07-25 07:36:22.394507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.046 [2024-07-25 07:36:22.394728] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.046 [2024-07-25 07:36:22.394737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.046 [2024-07-25 07:36:22.394745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.047 [2024-07-25 07:36:22.398258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.047 [2024-07-25 07:36:22.407348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.047 [2024-07-25 07:36:22.408147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.047 [2024-07-25 07:36:22.408186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.047 [2024-07-25 07:36:22.408196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.047 [2024-07-25 07:36:22.408442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.047 [2024-07-25 07:36:22.408663] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.047 [2024-07-25 07:36:22.408673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.047 [2024-07-25 07:36:22.408680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.309 [2024-07-25 07:36:22.412192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.309 [2024-07-25 07:36:22.421287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.309 [2024-07-25 07:36:22.422048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.309 [2024-07-25 07:36:22.422086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.309 [2024-07-25 07:36:22.422098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.309 [2024-07-25 07:36:22.422345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.309 [2024-07-25 07:36:22.422567] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.309 [2024-07-25 07:36:22.422578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.309 [2024-07-25 07:36:22.422586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.309 [2024-07-25 07:36:22.426093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.309 [2024-07-25 07:36:22.435184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.309 [2024-07-25 07:36:22.435989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.309 [2024-07-25 07:36:22.436027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.309 [2024-07-25 07:36:22.436038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.309 [2024-07-25 07:36:22.436283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.309 [2024-07-25 07:36:22.436504] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.309 [2024-07-25 07:36:22.436514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.309 [2024-07-25 07:36:22.436521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.309 [2024-07-25 07:36:22.440028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.309 [2024-07-25 07:36:22.448941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.309 [2024-07-25 07:36:22.449751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.309 [2024-07-25 07:36:22.449789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.309 [2024-07-25 07:36:22.449800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.309 [2024-07-25 07:36:22.450038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.309 [2024-07-25 07:36:22.450266] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.309 [2024-07-25 07:36:22.450277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.309 [2024-07-25 07:36:22.450284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.309 [2024-07-25 07:36:22.453791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.309 [2024-07-25 07:36:22.462877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.309 [2024-07-25 07:36:22.463650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.309 [2024-07-25 07:36:22.463688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.309 [2024-07-25 07:36:22.463704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.309 [2024-07-25 07:36:22.463941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.309 [2024-07-25 07:36:22.464162] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.309 [2024-07-25 07:36:22.464173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.309 [2024-07-25 07:36:22.464181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.309 [2024-07-25 07:36:22.467693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.309 [2024-07-25 07:36:22.476779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.309 [2024-07-25 07:36:22.477553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.309 [2024-07-25 07:36:22.477592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.309 [2024-07-25 07:36:22.477604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.309 [2024-07-25 07:36:22.477842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.309 [2024-07-25 07:36:22.478063] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.309 [2024-07-25 07:36:22.478073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.309 [2024-07-25 07:36:22.478080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.309 [2024-07-25 07:36:22.481602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.309 [2024-07-25 07:36:22.490692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.310 [2024-07-25 07:36:22.491481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.310 [2024-07-25 07:36:22.491519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.310 [2024-07-25 07:36:22.491530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.310 [2024-07-25 07:36:22.491767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.310 [2024-07-25 07:36:22.491997] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.310 [2024-07-25 07:36:22.492007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.310 [2024-07-25 07:36:22.492015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.310 [2024-07-25 07:36:22.495530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.310 [2024-07-25 07:36:22.504621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.310 [2024-07-25 07:36:22.505444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.310 [2024-07-25 07:36:22.505483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.310 [2024-07-25 07:36:22.505494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.310 [2024-07-25 07:36:22.505731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.310 [2024-07-25 07:36:22.505951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.310 [2024-07-25 07:36:22.505966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.310 [2024-07-25 07:36:22.505973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.310 [2024-07-25 07:36:22.509487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.310 [2024-07-25 07:36:22.518373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.310 [2024-07-25 07:36:22.519078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.310 [2024-07-25 07:36:22.519097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.310 [2024-07-25 07:36:22.519105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.310 [2024-07-25 07:36:22.519327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.310 [2024-07-25 07:36:22.519545] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.310 [2024-07-25 07:36:22.519554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.310 [2024-07-25 07:36:22.519562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.310 [2024-07-25 07:36:22.523059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.310 [2024-07-25 07:36:22.532138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.310 [2024-07-25 07:36:22.532868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.310 [2024-07-25 07:36:22.532907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.310 [2024-07-25 07:36:22.532919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.310 [2024-07-25 07:36:22.533156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.310 [2024-07-25 07:36:22.533386] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.310 [2024-07-25 07:36:22.533396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.310 [2024-07-25 07:36:22.533404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.310 [2024-07-25 07:36:22.536909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.310 [2024-07-25 07:36:22.545995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.310 [2024-07-25 07:36:22.546573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.310 [2024-07-25 07:36:22.546592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.310 [2024-07-25 07:36:22.546600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.310 [2024-07-25 07:36:22.546818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.310 [2024-07-25 07:36:22.547036] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.310 [2024-07-25 07:36:22.547046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.310 [2024-07-25 07:36:22.547053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.310 [2024-07-25 07:36:22.550561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.310 [2024-07-25 07:36:22.559851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.310 [2024-07-25 07:36:22.560705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.310 [2024-07-25 07:36:22.560743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.310 [2024-07-25 07:36:22.560754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.310 [2024-07-25 07:36:22.560991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.310 [2024-07-25 07:36:22.561221] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.310 [2024-07-25 07:36:22.561232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.310 [2024-07-25 07:36:22.561240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.310 [2024-07-25 07:36:22.564749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.310 [2024-07-25 07:36:22.573629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.310 [2024-07-25 07:36:22.574467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.310 [2024-07-25 07:36:22.574506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.310 [2024-07-25 07:36:22.574517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.310 [2024-07-25 07:36:22.574755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.310 [2024-07-25 07:36:22.574978] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.310 [2024-07-25 07:36:22.574988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.310 [2024-07-25 07:36:22.574995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.310 [2024-07-25 07:36:22.578509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.310 [2024-07-25 07:36:22.587403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.310 [2024-07-25 07:36:22.588239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.310 [2024-07-25 07:36:22.588278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.310 [2024-07-25 07:36:22.588288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.310 [2024-07-25 07:36:22.588525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.310 [2024-07-25 07:36:22.588746] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.310 [2024-07-25 07:36:22.588756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.310 [2024-07-25 07:36:22.588763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.310 [2024-07-25 07:36:22.592288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.310 [2024-07-25 07:36:22.601169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.310 [2024-07-25 07:36:22.601912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.310 [2024-07-25 07:36:22.601949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.310 [2024-07-25 07:36:22.601960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.310 [2024-07-25 07:36:22.602210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.310 [2024-07-25 07:36:22.602431] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.310 [2024-07-25 07:36:22.602441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.310 [2024-07-25 07:36:22.602449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.310 [2024-07-25 07:36:22.605954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.310 [2024-07-25 07:36:22.615042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.310 [2024-07-25 07:36:22.615802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.310 [2024-07-25 07:36:22.615840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.310 [2024-07-25 07:36:22.615850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.310 [2024-07-25 07:36:22.616087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.310 [2024-07-25 07:36:22.616318] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.310 [2024-07-25 07:36:22.616327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.310 [2024-07-25 07:36:22.616335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.310 [2024-07-25 07:36:22.619844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.310 [2024-07-25 07:36:22.629009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.310 [2024-07-25 07:36:22.629690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.311 [2024-07-25 07:36:22.629710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.311 [2024-07-25 07:36:22.629718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.311 [2024-07-25 07:36:22.629936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.311 [2024-07-25 07:36:22.630153] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.311 [2024-07-25 07:36:22.630162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.311 [2024-07-25 07:36:22.630169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.311 [2024-07-25 07:36:22.633676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.311 [2024-07-25 07:36:22.642758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.311 [2024-07-25 07:36:22.643523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.311 [2024-07-25 07:36:22.643562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.311 [2024-07-25 07:36:22.643572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.311 [2024-07-25 07:36:22.643809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.311 [2024-07-25 07:36:22.644030] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.311 [2024-07-25 07:36:22.644040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.311 [2024-07-25 07:36:22.644051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.311 [2024-07-25 07:36:22.647567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.311 [2024-07-25 07:36:22.656686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.311 [2024-07-25 07:36:22.657389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.311 [2024-07-25 07:36:22.657427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.311 [2024-07-25 07:36:22.657438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.311 [2024-07-25 07:36:22.657675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.311 [2024-07-25 07:36:22.657895] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.311 [2024-07-25 07:36:22.657905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.311 [2024-07-25 07:36:22.657913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.311 [2024-07-25 07:36:22.661429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.311 [2024-07-25 07:36:22.670515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.311 [2024-07-25 07:36:22.671280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.311 [2024-07-25 07:36:22.671326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.311 [2024-07-25 07:36:22.671336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.311 [2024-07-25 07:36:22.671573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.311 [2024-07-25 07:36:22.671794] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.311 [2024-07-25 07:36:22.671804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.311 [2024-07-25 07:36:22.671812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.311 [2024-07-25 07:36:22.675327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.574 [2024-07-25 07:36:22.684425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.574 [2024-07-25 07:36:22.685245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.574 [2024-07-25 07:36:22.685284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.574 [2024-07-25 07:36:22.685294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.574 [2024-07-25 07:36:22.685531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.574 [2024-07-25 07:36:22.685752] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.574 [2024-07-25 07:36:22.685762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.574 [2024-07-25 07:36:22.685770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.574 [2024-07-25 07:36:22.689285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.574 [2024-07-25 07:36:22.698170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.574 [2024-07-25 07:36:22.698967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.574 [2024-07-25 07:36:22.699009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.574 [2024-07-25 07:36:22.699020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.574 [2024-07-25 07:36:22.699268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.574 [2024-07-25 07:36:22.699489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.574 [2024-07-25 07:36:22.699499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.574 [2024-07-25 07:36:22.699506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.574 [2024-07-25 07:36:22.703012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.574 [2024-07-25 07:36:22.712103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.574 [2024-07-25 07:36:22.712866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.574 [2024-07-25 07:36:22.712905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.574 [2024-07-25 07:36:22.712915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.574 [2024-07-25 07:36:22.713153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.574 [2024-07-25 07:36:22.713383] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.574 [2024-07-25 07:36:22.713393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.574 [2024-07-25 07:36:22.713401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.574 [2024-07-25 07:36:22.716907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.574 [2024-07-25 07:36:22.725993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.574 [2024-07-25 07:36:22.726801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.574 [2024-07-25 07:36:22.726838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.574 [2024-07-25 07:36:22.726849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.574 [2024-07-25 07:36:22.727086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.574 [2024-07-25 07:36:22.727316] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.574 [2024-07-25 07:36:22.727326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.574 [2024-07-25 07:36:22.727334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.574 [2024-07-25 07:36:22.730841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.574 [2024-07-25 07:36:22.739926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.574 [2024-07-25 07:36:22.740686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.574 [2024-07-25 07:36:22.740724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.574 [2024-07-25 07:36:22.740734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.574 [2024-07-25 07:36:22.740971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.574 [2024-07-25 07:36:22.741197] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.574 [2024-07-25 07:36:22.741216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.574 [2024-07-25 07:36:22.741224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.574 [2024-07-25 07:36:22.744732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.574 [2024-07-25 07:36:22.753816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.574 [2024-07-25 07:36:22.754591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.574 [2024-07-25 07:36:22.754629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.574 [2024-07-25 07:36:22.754640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.574 [2024-07-25 07:36:22.754877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.574 [2024-07-25 07:36:22.755098] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.574 [2024-07-25 07:36:22.755108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.574 [2024-07-25 07:36:22.755115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.574 [2024-07-25 07:36:22.758629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.574 [2024-07-25 07:36:22.767714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.574 [2024-07-25 07:36:22.768517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.574 [2024-07-25 07:36:22.768556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.574 [2024-07-25 07:36:22.768566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.574 [2024-07-25 07:36:22.768803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.574 [2024-07-25 07:36:22.769025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.574 [2024-07-25 07:36:22.769034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.574 [2024-07-25 07:36:22.769042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.574 [2024-07-25 07:36:22.772556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.574 [2024-07-25 07:36:22.781643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.574 [2024-07-25 07:36:22.782451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.574 [2024-07-25 07:36:22.782489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.574 [2024-07-25 07:36:22.782501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.574 [2024-07-25 07:36:22.782740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.574 [2024-07-25 07:36:22.782961] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.574 [2024-07-25 07:36:22.782970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.574 [2024-07-25 07:36:22.782978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.575 [2024-07-25 07:36:22.786508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.575 [2024-07-25 07:36:22.795602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.575 [2024-07-25 07:36:22.796406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.575 [2024-07-25 07:36:22.796444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.575 [2024-07-25 07:36:22.796456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.575 [2024-07-25 07:36:22.796695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.575 [2024-07-25 07:36:22.796915] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.575 [2024-07-25 07:36:22.796925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.575 [2024-07-25 07:36:22.796933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.575 [2024-07-25 07:36:22.800526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.575 [2024-07-25 07:36:22.809417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.575 [2024-07-25 07:36:22.810178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.575 [2024-07-25 07:36:22.810221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.575 [2024-07-25 07:36:22.810233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.575 [2024-07-25 07:36:22.810470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.575 [2024-07-25 07:36:22.810691] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.575 [2024-07-25 07:36:22.810701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.575 [2024-07-25 07:36:22.810708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.575 [2024-07-25 07:36:22.814220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.575 [2024-07-25 07:36:22.823310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.575 [2024-07-25 07:36:22.824101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.575 [2024-07-25 07:36:22.824139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.575 [2024-07-25 07:36:22.824149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.575 [2024-07-25 07:36:22.824395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.575 [2024-07-25 07:36:22.824618] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.575 [2024-07-25 07:36:22.824627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.575 [2024-07-25 07:36:22.824635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.575 [2024-07-25 07:36:22.828138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.575 [2024-07-25 07:36:22.837224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.575 [2024-07-25 07:36:22.838021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.575 [2024-07-25 07:36:22.838060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.575 [2024-07-25 07:36:22.838074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.575 [2024-07-25 07:36:22.838321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.575 [2024-07-25 07:36:22.838543] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.575 [2024-07-25 07:36:22.838553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.575 [2024-07-25 07:36:22.838560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.575 [2024-07-25 07:36:22.842064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.575 [2024-07-25 07:36:22.851152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.575 [2024-07-25 07:36:22.851960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.575 [2024-07-25 07:36:22.851998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.575 [2024-07-25 07:36:22.852008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.575 [2024-07-25 07:36:22.852255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.575 [2024-07-25 07:36:22.852477] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.575 [2024-07-25 07:36:22.852486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.575 [2024-07-25 07:36:22.852494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.575 [2024-07-25 07:36:22.855998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.575 [2024-07-25 07:36:22.864913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.575 [2024-07-25 07:36:22.865507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.575 [2024-07-25 07:36:22.865545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.575 [2024-07-25 07:36:22.865556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.575 [2024-07-25 07:36:22.865793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.575 [2024-07-25 07:36:22.866014] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.575 [2024-07-25 07:36:22.866024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.575 [2024-07-25 07:36:22.866031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.575 [2024-07-25 07:36:22.869547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.575 [2024-07-25 07:36:22.878840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.575 [2024-07-25 07:36:22.879614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.575 [2024-07-25 07:36:22.879652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.575 [2024-07-25 07:36:22.879663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.575 [2024-07-25 07:36:22.879900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.575 [2024-07-25 07:36:22.880121] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.575 [2024-07-25 07:36:22.880134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.575 [2024-07-25 07:36:22.880142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.575 [2024-07-25 07:36:22.883664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.575 [2024-07-25 07:36:22.892756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.575 [2024-07-25 07:36:22.893522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.575 [2024-07-25 07:36:22.893560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.575 [2024-07-25 07:36:22.893572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.575 [2024-07-25 07:36:22.893820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.575 [2024-07-25 07:36:22.894042] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.575 [2024-07-25 07:36:22.894052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.575 [2024-07-25 07:36:22.894059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.575 [2024-07-25 07:36:22.897573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.575 [2024-07-25 07:36:22.906660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.575 [2024-07-25 07:36:22.907465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.575 [2024-07-25 07:36:22.907503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.575 [2024-07-25 07:36:22.907513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.575 [2024-07-25 07:36:22.907751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.575 [2024-07-25 07:36:22.907972] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.575 [2024-07-25 07:36:22.907982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.575 [2024-07-25 07:36:22.907990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.575 [2024-07-25 07:36:22.911505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.575 [2024-07-25 07:36:22.920592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.575 [2024-07-25 07:36:22.921281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.575 [2024-07-25 07:36:22.921318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.575 [2024-07-25 07:36:22.921329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.575 [2024-07-25 07:36:22.921566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.575 [2024-07-25 07:36:22.921787] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.576 [2024-07-25 07:36:22.921796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.576 [2024-07-25 07:36:22.921804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.576 [2024-07-25 07:36:22.925319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.576 [2024-07-25 07:36:22.934406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.576 [2024-07-25 07:36:22.935185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.576 [2024-07-25 07:36:22.935230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.576 [2024-07-25 07:36:22.935242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.576 [2024-07-25 07:36:22.935480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.576 [2024-07-25 07:36:22.935701] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.576 [2024-07-25 07:36:22.935711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.576 [2024-07-25 07:36:22.935718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.576 [2024-07-25 07:36:22.939229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.838 [2024-07-25 07:36:22.948319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.838 [2024-07-25 07:36:22.949078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-07-25 07:36:22.949116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.838 [2024-07-25 07:36:22.949126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.838 [2024-07-25 07:36:22.949372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.838 [2024-07-25 07:36:22.949594] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.838 [2024-07-25 07:36:22.949603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.838 [2024-07-25 07:36:22.949611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.838 [2024-07-25 07:36:22.953116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.838 [2024-07-25 07:36:22.962208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.838 [2024-07-25 07:36:22.962984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-07-25 07:36:22.963022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.838 [2024-07-25 07:36:22.963032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.838 [2024-07-25 07:36:22.963277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.838 [2024-07-25 07:36:22.963498] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.838 [2024-07-25 07:36:22.963508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.838 [2024-07-25 07:36:22.963516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.838 [2024-07-25 07:36:22.967020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.838 [2024-07-25 07:36:22.976112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.838 [2024-07-25 07:36:22.976885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-07-25 07:36:22.976923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.838 [2024-07-25 07:36:22.976934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.838 [2024-07-25 07:36:22.977175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.838 [2024-07-25 07:36:22.977404] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.838 [2024-07-25 07:36:22.977414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.838 [2024-07-25 07:36:22.977422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.838 [2024-07-25 07:36:22.980928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.838 [2024-07-25 07:36:22.990029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.838 [2024-07-25 07:36:22.990790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-07-25 07:36:22.990809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.838 [2024-07-25 07:36:22.990817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.838 [2024-07-25 07:36:22.991034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.838 [2024-07-25 07:36:22.991255] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.838 [2024-07-25 07:36:22.991264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.838 [2024-07-25 07:36:22.991272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.838 [2024-07-25 07:36:22.994782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.838 [2024-07-25 07:36:23.003864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.838 [2024-07-25 07:36:23.004508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-07-25 07:36:23.004546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.838 [2024-07-25 07:36:23.004559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.838 [2024-07-25 07:36:23.004800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.838 [2024-07-25 07:36:23.005021] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.838 [2024-07-25 07:36:23.005030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.838 [2024-07-25 07:36:23.005038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.838 [2024-07-25 07:36:23.008550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.838 [2024-07-25 07:36:23.017641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.838 [2024-07-25 07:36:23.018465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-07-25 07:36:23.018504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.838 [2024-07-25 07:36:23.018515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.838 [2024-07-25 07:36:23.018752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.838 [2024-07-25 07:36:23.018973] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.838 [2024-07-25 07:36:23.018983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.838 [2024-07-25 07:36:23.018997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.838 [2024-07-25 07:36:23.022513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.838 [2024-07-25 07:36:23.031400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.838 [2024-07-25 07:36:23.032238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-07-25 07:36:23.032276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.838 [2024-07-25 07:36:23.032288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.838 [2024-07-25 07:36:23.032529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.838 [2024-07-25 07:36:23.032750] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.838 [2024-07-25 07:36:23.032759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.838 [2024-07-25 07:36:23.032767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.838 [2024-07-25 07:36:23.036280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.838 [2024-07-25 07:36:23.045161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.838 [2024-07-25 07:36:23.045969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-07-25 07:36:23.046007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.838 [2024-07-25 07:36:23.046018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.838 [2024-07-25 07:36:23.046263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.838 [2024-07-25 07:36:23.046485] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.838 [2024-07-25 07:36:23.046494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.838 [2024-07-25 07:36:23.046502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.838 [2024-07-25 07:36:23.050009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.838 [2024-07-25 07:36:23.059102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.838 [2024-07-25 07:36:23.059899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-07-25 07:36:23.059936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.838 [2024-07-25 07:36:23.059953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.838 [2024-07-25 07:36:23.060193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.838 [2024-07-25 07:36:23.060421] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.838 [2024-07-25 07:36:23.060431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.838 [2024-07-25 07:36:23.060439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.839 [2024-07-25 07:36:23.063941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.839 [2024-07-25 07:36:23.072857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.839 [2024-07-25 07:36:23.073434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-25 07:36:23.073457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.839 [2024-07-25 07:36:23.073466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.839 [2024-07-25 07:36:23.073684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.839 [2024-07-25 07:36:23.073901] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.839 [2024-07-25 07:36:23.073911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.839 [2024-07-25 07:36:23.073918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.839 [2024-07-25 07:36:23.077423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.839 [2024-07-25 07:36:23.086722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.839 [2024-07-25 07:36:23.087345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-25 07:36:23.087369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.839 [2024-07-25 07:36:23.087377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.839 [2024-07-25 07:36:23.087593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.839 [2024-07-25 07:36:23.087810] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.839 [2024-07-25 07:36:23.087819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.839 [2024-07-25 07:36:23.087826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.839 [2024-07-25 07:36:23.091331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.839 [2024-07-25 07:36:23.100626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.839 [2024-07-25 07:36:23.101330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-25 07:36:23.101368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.839 [2024-07-25 07:36:23.101380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.839 [2024-07-25 07:36:23.101620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.839 [2024-07-25 07:36:23.101841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.839 [2024-07-25 07:36:23.101850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.839 [2024-07-25 07:36:23.101858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.839 [2024-07-25 07:36:23.105372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.839 [2024-07-25 07:36:23.114458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.839 [2024-07-25 07:36:23.115179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-25 07:36:23.115198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.839 [2024-07-25 07:36:23.115212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.839 [2024-07-25 07:36:23.115429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.839 [2024-07-25 07:36:23.115652] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.839 [2024-07-25 07:36:23.115660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.839 [2024-07-25 07:36:23.115668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.839 [2024-07-25 07:36:23.119170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.839 [2024-07-25 07:36:23.128262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.839 [2024-07-25 07:36:23.128947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-25 07:36:23.128963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.839 [2024-07-25 07:36:23.128971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.839 [2024-07-25 07:36:23.129187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.839 [2024-07-25 07:36:23.129409] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.839 [2024-07-25 07:36:23.129418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.839 [2024-07-25 07:36:23.129426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.839 [2024-07-25 07:36:23.132926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.839 [2024-07-25 07:36:23.142011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.839 [2024-07-25 07:36:23.142810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-25 07:36:23.142848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.839 [2024-07-25 07:36:23.142860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.839 [2024-07-25 07:36:23.143098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.839 [2024-07-25 07:36:23.143328] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.839 [2024-07-25 07:36:23.143338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.839 [2024-07-25 07:36:23.143345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.839 [2024-07-25 07:36:23.146850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.839 [2024-07-25 07:36:23.155943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.839 [2024-07-25 07:36:23.156649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-25 07:36:23.156669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.839 [2024-07-25 07:36:23.156677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.839 [2024-07-25 07:36:23.156894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.839 [2024-07-25 07:36:23.157111] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.839 [2024-07-25 07:36:23.157121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.839 [2024-07-25 07:36:23.157128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.839 [2024-07-25 07:36:23.160636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.839 [2024-07-25 07:36:23.169720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.839 [2024-07-25 07:36:23.170422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-25 07:36:23.170460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.839 [2024-07-25 07:36:23.170470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.839 [2024-07-25 07:36:23.170707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.839 [2024-07-25 07:36:23.170928] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.839 [2024-07-25 07:36:23.170938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.839 [2024-07-25 07:36:23.170946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.839 [2024-07-25 07:36:23.174459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.839 [2024-07-25 07:36:23.183559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.839 [2024-07-25 07:36:23.184243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-25 07:36:23.184268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.839 [2024-07-25 07:36:23.184277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.839 [2024-07-25 07:36:23.184499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.839 [2024-07-25 07:36:23.184718] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.839 [2024-07-25 07:36:23.184727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.839 [2024-07-25 07:36:23.184734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.839 [2024-07-25 07:36:23.188244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.839 [2024-07-25 07:36:23.197336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.839 [2024-07-25 07:36:23.198000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-25 07:36:23.198038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:15.839 [2024-07-25 07:36:23.198050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:15.839 [2024-07-25 07:36:23.198296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:15.839 [2024-07-25 07:36:23.198519] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.839 [2024-07-25 07:36:23.198528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.839 [2024-07-25 07:36:23.198536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.839 [2024-07-25 07:36:23.202043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.102 [2024-07-25 07:36:23.211131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.102 [2024-07-25 07:36:23.211920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-25 07:36:23.211959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.102 [2024-07-25 07:36:23.211975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.102 [2024-07-25 07:36:23.212218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.102 [2024-07-25 07:36:23.212440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.102 [2024-07-25 07:36:23.212450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.102 [2024-07-25 07:36:23.212458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.102 [2024-07-25 07:36:23.215964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.102 [2024-07-25 07:36:23.225051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.102 [2024-07-25 07:36:23.225845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-25 07:36:23.225883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.102 [2024-07-25 07:36:23.225894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.102 [2024-07-25 07:36:23.226131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.102 [2024-07-25 07:36:23.226359] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.102 [2024-07-25 07:36:23.226370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.102 [2024-07-25 07:36:23.226377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.102 [2024-07-25 07:36:23.229883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.102 [2024-07-25 07:36:23.238977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.102 [2024-07-25 07:36:23.239619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-25 07:36:23.239639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.102 [2024-07-25 07:36:23.239647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.102 [2024-07-25 07:36:23.239864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.102 [2024-07-25 07:36:23.240081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.102 [2024-07-25 07:36:23.240090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.102 [2024-07-25 07:36:23.240097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.102 [2024-07-25 07:36:23.243603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.102 [2024-07-25 07:36:23.252898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.102 [2024-07-25 07:36:23.253475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-25 07:36:23.253493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.102 [2024-07-25 07:36:23.253501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.102 [2024-07-25 07:36:23.253718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.102 [2024-07-25 07:36:23.253935] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.102 [2024-07-25 07:36:23.253953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.102 [2024-07-25 07:36:23.253960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.102 [2024-07-25 07:36:23.257475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.102 [2024-07-25 07:36:23.266773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.102 [2024-07-25 07:36:23.267563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-25 07:36:23.267602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.102 [2024-07-25 07:36:23.267612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.102 [2024-07-25 07:36:23.267850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.102 [2024-07-25 07:36:23.268071] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.102 [2024-07-25 07:36:23.268081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.102 [2024-07-25 07:36:23.268088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.102 [2024-07-25 07:36:23.271600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.102 [2024-07-25 07:36:23.280718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.102 [2024-07-25 07:36:23.281441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-25 07:36:23.281462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.102 [2024-07-25 07:36:23.281470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.102 [2024-07-25 07:36:23.281687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.102 [2024-07-25 07:36:23.281904] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.102 [2024-07-25 07:36:23.281915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.102 [2024-07-25 07:36:23.281922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.102 [2024-07-25 07:36:23.285436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.102 [2024-07-25 07:36:23.294531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.102 [2024-07-25 07:36:23.295206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-25 07:36:23.295223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.102 [2024-07-25 07:36:23.295239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.102 [2024-07-25 07:36:23.295456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.102 [2024-07-25 07:36:23.295673] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.102 [2024-07-25 07:36:23.295683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.102 [2024-07-25 07:36:23.295689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.103 [2024-07-25 07:36:23.299191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.103 [2024-07-25 07:36:23.308287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.103 [2024-07-25 07:36:23.308947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-25 07:36:23.308963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.103 [2024-07-25 07:36:23.308971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.103 [2024-07-25 07:36:23.309187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.103 [2024-07-25 07:36:23.309411] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.103 [2024-07-25 07:36:23.309421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.103 [2024-07-25 07:36:23.309428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.103 [2024-07-25 07:36:23.312956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.103 [2024-07-25 07:36:23.322046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.103 [2024-07-25 07:36:23.322798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-25 07:36:23.322836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.103 [2024-07-25 07:36:23.322847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.103 [2024-07-25 07:36:23.323084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.103 [2024-07-25 07:36:23.323314] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.103 [2024-07-25 07:36:23.323324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.103 [2024-07-25 07:36:23.323332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.103 [2024-07-25 07:36:23.326841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.103 [2024-07-25 07:36:23.335946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.103 [2024-07-25 07:36:23.336569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-25 07:36:23.336589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.103 [2024-07-25 07:36:23.336597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.103 [2024-07-25 07:36:23.336814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.103 [2024-07-25 07:36:23.337032] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.103 [2024-07-25 07:36:23.337041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.103 [2024-07-25 07:36:23.337049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.103 [2024-07-25 07:36:23.340560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.103 [2024-07-25 07:36:23.349865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.103 [2024-07-25 07:36:23.350415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-25 07:36:23.350432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.103 [2024-07-25 07:36:23.350440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.103 [2024-07-25 07:36:23.350662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.103 [2024-07-25 07:36:23.350879] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.103 [2024-07-25 07:36:23.350888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.103 [2024-07-25 07:36:23.350895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.103 [2024-07-25 07:36:23.354405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.103 [2024-07-25 07:36:23.363703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.103 [2024-07-25 07:36:23.364456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-25 07:36:23.364495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.103 [2024-07-25 07:36:23.364506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.103 [2024-07-25 07:36:23.364743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.103 [2024-07-25 07:36:23.364963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.103 [2024-07-25 07:36:23.364973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.103 [2024-07-25 07:36:23.364981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.103 [2024-07-25 07:36:23.368489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.103 [2024-07-25 07:36:23.377576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.103 [2024-07-25 07:36:23.378279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-25 07:36:23.378299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.103 [2024-07-25 07:36:23.378308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.103 [2024-07-25 07:36:23.378526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.103 [2024-07-25 07:36:23.378745] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.103 [2024-07-25 07:36:23.378755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.103 [2024-07-25 07:36:23.378762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.103 [2024-07-25 07:36:23.382265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.103 [2024-07-25 07:36:23.391364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.103 [2024-07-25 07:36:23.392161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-25 07:36:23.392199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.103 [2024-07-25 07:36:23.392219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.103 [2024-07-25 07:36:23.392457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.103 [2024-07-25 07:36:23.392677] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.103 [2024-07-25 07:36:23.392687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.103 [2024-07-25 07:36:23.392700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.103 [2024-07-25 07:36:23.396224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.103 [2024-07-25 07:36:23.405107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.103 [2024-07-25 07:36:23.405799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-25 07:36:23.405837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.103 [2024-07-25 07:36:23.405848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.103 [2024-07-25 07:36:23.406084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.103 [2024-07-25 07:36:23.406314] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.103 [2024-07-25 07:36:23.406324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.103 [2024-07-25 07:36:23.406332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.103 [2024-07-25 07:36:23.409840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.103 [2024-07-25 07:36:23.418931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.103 [2024-07-25 07:36:23.419645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-25 07:36:23.419683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.103 [2024-07-25 07:36:23.419694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.103 [2024-07-25 07:36:23.419931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.103 [2024-07-25 07:36:23.420152] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.103 [2024-07-25 07:36:23.420161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.103 [2024-07-25 07:36:23.420169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.103 [2024-07-25 07:36:23.423686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.103 [2024-07-25 07:36:23.432781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.103 [2024-07-25 07:36:23.433503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-25 07:36:23.433542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.103 [2024-07-25 07:36:23.433554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.103 [2024-07-25 07:36:23.433793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.103 [2024-07-25 07:36:23.434013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.103 [2024-07-25 07:36:23.434023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.103 [2024-07-25 07:36:23.434030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.103 [2024-07-25 07:36:23.437540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.103 [2024-07-25 07:36:23.446638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.104 [2024-07-25 07:36:23.447319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-25 07:36:23.447362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.104 [2024-07-25 07:36:23.447374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.104 [2024-07-25 07:36:23.447613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.104 [2024-07-25 07:36:23.447834] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.104 [2024-07-25 07:36:23.447843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.104 [2024-07-25 07:36:23.447851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.104 [2024-07-25 07:36:23.451365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.104 [2024-07-25 07:36:23.460450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.104 [2024-07-25 07:36:23.461013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-25 07:36:23.461032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.104 [2024-07-25 07:36:23.461040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.104 [2024-07-25 07:36:23.461263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.104 [2024-07-25 07:36:23.461481] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.104 [2024-07-25 07:36:23.461491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.104 [2024-07-25 07:36:23.461498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.104 [2024-07-25 07:36:23.464999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.366 [2024-07-25 07:36:23.474303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.366 [2024-07-25 07:36:23.474965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.366 [2024-07-25 07:36:23.474981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.366 [2024-07-25 07:36:23.474989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.366 [2024-07-25 07:36:23.475210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.366 [2024-07-25 07:36:23.475428] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.366 [2024-07-25 07:36:23.475437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.366 [2024-07-25 07:36:23.475445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.366 [2024-07-25 07:36:23.478948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.366 [2024-07-25 07:36:23.488083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.366 [2024-07-25 07:36:23.488745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.366 [2024-07-25 07:36:23.488784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.366 [2024-07-25 07:36:23.488796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.366 [2024-07-25 07:36:23.489034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.366 [2024-07-25 07:36:23.489268] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.366 [2024-07-25 07:36:23.489279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.366 [2024-07-25 07:36:23.489286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.366 [2024-07-25 07:36:23.492796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.366 [2024-07-25 07:36:23.501899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.366 [2024-07-25 07:36:23.502733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.366 [2024-07-25 07:36:23.502771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.366 [2024-07-25 07:36:23.502783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.366 [2024-07-25 07:36:23.503023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.366 [2024-07-25 07:36:23.503251] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.366 [2024-07-25 07:36:23.503269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.366 [2024-07-25 07:36:23.503277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.366 [2024-07-25 07:36:23.506782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.366 [2024-07-25 07:36:23.515660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.366 [2024-07-25 07:36:23.516499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.366 [2024-07-25 07:36:23.516538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.366 [2024-07-25 07:36:23.516550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.366 [2024-07-25 07:36:23.516789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.366 [2024-07-25 07:36:23.517010] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.366 [2024-07-25 07:36:23.517019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.366 [2024-07-25 07:36:23.517027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.366 [2024-07-25 07:36:23.520538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.366 [2024-07-25 07:36:23.529418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.366 [2024-07-25 07:36:23.530187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.366 [2024-07-25 07:36:23.530232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.366 [2024-07-25 07:36:23.530244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.366 [2024-07-25 07:36:23.530485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.366 [2024-07-25 07:36:23.530706] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.366 [2024-07-25 07:36:23.530715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.366 [2024-07-25 07:36:23.530723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.366 [2024-07-25 07:36:23.534235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
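
Every reconnect attempt in the cycles above fails the same way: connect() to 10.0.0.2 port 4420 returns errno = 111, which on Linux is ECONNREFUSED, i.e. the address is reachable but nothing is accepting on that port (the target process had been killed, as the bdevperf.sh message just below shows). For readers who do not have the errno table memorized, the following minimal standalone C sketch reproduces the same failure; it is an illustration only, not SPDK code, and simply borrows the address and port reported in the log.

    /* econnrefused_demo.c - what "connect() failed, errno = 111" means.
     * Illustration only, not SPDK code; address/port are the ones in the log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* If the host is reachable but no listener is bound to the port,
             * errno is 111 (ECONNREFUSED) - the same error posix_sock_create reports. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Once a listener is bound to 10.0.0.2:4420 again, the same connect() succeeds, which is why the harness immediately restarts the target in the lines that follow.
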
00:30:16.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 278373 Killed "${NVMF_APP[@]}" "$@" 00:30:16.366 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:16.366 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:16.366 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:16.366 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:16.366 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.366 [2024-07-25 07:36:23.543321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.366 [2024-07-25 07:36:23.543990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.366 [2024-07-25 07:36:23.544009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.366 [2024-07-25 07:36:23.544017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.366 [2024-07-25 07:36:23.544243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.366 [2024-07-25 07:36:23.544462] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.366 [2024-07-25 07:36:23.544472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.366 [2024-07-25 07:36:23.544480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.366 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=280025 00:30:16.366 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 280025 00:30:16.366 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:16.366 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 280025 ']' 00:30:16.366 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.366 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:16.367 [2024-07-25 07:36:23.547982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.367 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:16.367 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:16.367 07:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:16.367 [2024-07-25 07:36:23.557072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.367 [2024-07-25 07:36:23.557738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.367 [2024-07-25 07:36:23.557755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.367 [2024-07-25 07:36:23.557763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.367 [2024-07-25 07:36:23.557979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.367 [2024-07-25 07:36:23.558197] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.367 [2024-07-25 07:36:23.558212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.367 [2024-07-25 07:36:23.558219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.367 [2024-07-25 07:36:23.561727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.367 [2024-07-25 07:36:23.571016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.367 [2024-07-25 07:36:23.571830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.367 [2024-07-25 07:36:23.571869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.367 [2024-07-25 07:36:23.571880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.367 [2024-07-25 07:36:23.572116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.367 [2024-07-25 07:36:23.572344] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.367 [2024-07-25 07:36:23.572355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.367 [2024-07-25 07:36:23.572362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.367 [2024-07-25 07:36:23.575870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
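
The shell trace interleaved above shows the recovery path the test takes: bdevperf.sh's tgt_init calls nvmfappstart -m 0xE, records nvmfpid=280025, and then blocks in waitforlisten 280025 while printing "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". As a rough illustration of that readiness check (a sketch of the idea only, not the real waitforlisten helper from the SPDK test scripts), one can poll connect() on the RPC socket path until it succeeds:

    /* wait_for_rpc_sock.c - illustrative sketch of "wait until the app listens on
     * /var/tmp/spdk.sock"; not the actual waitforlisten from SPDK's test scripts. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int rpc_sock_ready(const char *path)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return 0;
        }
        struct sockaddr_un addr = { 0 };
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
        close(fd);
        return ok;
    }

    int main(void)
    {
        /* Poll for up to ~100 seconds, the way a wait-for-listen loop would. */
        for (int i = 0; i < 1000; i++) {
            if (rpc_sock_ready("/var/tmp/spdk.sock")) {
                puts("RPC socket is accepting connections");
                return 0;
            }
            usleep(100 * 1000); /* 100 ms between attempts */
        }
        fputs("timed out waiting for /var/tmp/spdk.sock\n", stderr);
        return 1;
    }
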
00:30:16.367 [2024-07-25 07:36:23.584971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.367 [2024-07-25 07:36:23.585668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.367 [2024-07-25 07:36:23.585688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.367 [2024-07-25 07:36:23.585696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.367 [2024-07-25 07:36:23.585913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.367 [2024-07-25 07:36:23.586130] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.367 [2024-07-25 07:36:23.586139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.367 [2024-07-25 07:36:23.586146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.367 [2024-07-25 07:36:23.589651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.367 [2024-07-25 07:36:23.597894] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:30:16.367 [2024-07-25 07:36:23.597945] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.367 [2024-07-25 07:36:23.598738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.367 [2024-07-25 07:36:23.599409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.367 [2024-07-25 07:36:23.599426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.367 [2024-07-25 07:36:23.599434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.367 [2024-07-25 07:36:23.599652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.367 [2024-07-25 07:36:23.599869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.367 [2024-07-25 07:36:23.599878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.367 [2024-07-25 07:36:23.599886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.367 [2024-07-25 07:36:23.603395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.367 [2024-07-25 07:36:23.612686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.367 [2024-07-25 07:36:23.613487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.367 [2024-07-25 07:36:23.613526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.367 [2024-07-25 07:36:23.613537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.367 [2024-07-25 07:36:23.613775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.367 [2024-07-25 07:36:23.613996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.367 [2024-07-25 07:36:23.614005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.367 [2024-07-25 07:36:23.614013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.367 [2024-07-25 07:36:23.617531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.367 [2024-07-25 07:36:23.626621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.367 [2024-07-25 07:36:23.627430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.367 [2024-07-25 07:36:23.627468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.367 [2024-07-25 07:36:23.627479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.367 [2024-07-25 07:36:23.627717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.367 [2024-07-25 07:36:23.627938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.367 [2024-07-25 07:36:23.627948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.367 [2024-07-25 07:36:23.627956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.367 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.367 [2024-07-25 07:36:23.631467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.367 [2024-07-25 07:36:23.640396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.367 [2024-07-25 07:36:23.641073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.367 [2024-07-25 07:36:23.641111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.367 [2024-07-25 07:36:23.641122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.367 [2024-07-25 07:36:23.641368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.367 [2024-07-25 07:36:23.641590] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.367 [2024-07-25 07:36:23.641600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.367 [2024-07-25 07:36:23.641608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.367 [2024-07-25 07:36:23.645114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.367 [2024-07-25 07:36:23.654211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.367 [2024-07-25 07:36:23.654920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.367 [2024-07-25 07:36:23.654958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.367 [2024-07-25 07:36:23.654973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.367 [2024-07-25 07:36:23.655218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.367 [2024-07-25 07:36:23.655439] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.367 [2024-07-25 07:36:23.655449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.367 [2024-07-25 07:36:23.655457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.367 [2024-07-25 07:36:23.658961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.367 [2024-07-25 07:36:23.668150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.367 [2024-07-25 07:36:23.668867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.367 [2024-07-25 07:36:23.668887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.367 [2024-07-25 07:36:23.668896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.367 [2024-07-25 07:36:23.669114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.367 [2024-07-25 07:36:23.669337] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.367 [2024-07-25 07:36:23.669347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.367 [2024-07-25 07:36:23.669355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.368 [2024-07-25 07:36:23.672856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.368 [2024-07-25 07:36:23.681235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:16.368 [2024-07-25 07:36:23.681946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.368 [2024-07-25 07:36:23.682587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.368 [2024-07-25 07:36:23.682625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.368 [2024-07-25 07:36:23.682636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.368 [2024-07-25 07:36:23.682874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.368 [2024-07-25 07:36:23.683096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.368 [2024-07-25 07:36:23.683106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.368 [2024-07-25 07:36:23.683113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.368 [2024-07-25 07:36:23.686642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.368 [2024-07-25 07:36:23.695778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.368 [2024-07-25 07:36:23.696570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.368 [2024-07-25 07:36:23.696609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.368 [2024-07-25 07:36:23.696620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.368 [2024-07-25 07:36:23.696858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.368 [2024-07-25 07:36:23.697086] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.368 [2024-07-25 07:36:23.697097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.368 [2024-07-25 07:36:23.697105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.368 [2024-07-25 07:36:23.700618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.368 [2024-07-25 07:36:23.709720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.368 [2024-07-25 07:36:23.710535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.368 [2024-07-25 07:36:23.710573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.368 [2024-07-25 07:36:23.710584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.368 [2024-07-25 07:36:23.710823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.368 [2024-07-25 07:36:23.711044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.368 [2024-07-25 07:36:23.711054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.368 [2024-07-25 07:36:23.711062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.368 [2024-07-25 07:36:23.714578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.368 [2024-07-25 07:36:23.723667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.368 [2024-07-25 07:36:23.724498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.368 [2024-07-25 07:36:23.724536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.368 [2024-07-25 07:36:23.724547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.368 [2024-07-25 07:36:23.724785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.368 [2024-07-25 07:36:23.725007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.368 [2024-07-25 07:36:23.725016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.368 [2024-07-25 07:36:23.725024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.368 [2024-07-25 07:36:23.728542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.630 [2024-07-25 07:36:23.734921] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.630 [2024-07-25 07:36:23.734946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.630 [2024-07-25 07:36:23.734952] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.630 [2024-07-25 07:36:23.734957] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.630 [2024-07-25 07:36:23.734962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.630 [2024-07-25 07:36:23.735068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.630 [2024-07-25 07:36:23.735242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.630 [2024-07-25 07:36:23.735248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.630 [2024-07-25 07:36:23.737428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.630 [2024-07-25 07:36:23.737912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.630 [2024-07-25 07:36:23.737937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.630 [2024-07-25 07:36:23.737946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.630 [2024-07-25 07:36:23.738164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.630 [2024-07-25 07:36:23.738387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.630 [2024-07-25 07:36:23.738397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.630 [2024-07-25 07:36:23.738404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:16.630 [2024-07-25 07:36:23.741903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.630 [2024-07-25 07:36:23.751195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.630 [2024-07-25 07:36:23.751990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.630 [2024-07-25 07:36:23.752032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.631 [2024-07-25 07:36:23.752043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.631 [2024-07-25 07:36:23.752297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.631 [2024-07-25 07:36:23.752519] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.631 [2024-07-25 07:36:23.752529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.631 [2024-07-25 07:36:23.752537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.631 [2024-07-25 07:36:23.756041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.631 [2024-07-25 07:36:23.764974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.631 [2024-07-25 07:36:23.765811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.631 [2024-07-25 07:36:23.765852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.631 [2024-07-25 07:36:23.765863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.631 [2024-07-25 07:36:23.766103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.631 [2024-07-25 07:36:23.766330] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.631 [2024-07-25 07:36:23.766341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.631 [2024-07-25 07:36:23.766349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.631 [2024-07-25 07:36:23.769854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
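
The startup notices earlier in this log tie together: the new target is launched with -m 0xE, the app then reports "Total cores available: 3", and reactors come up on cores 2, 3 and 1. That is the usual reading of a hex core mask: 0xE is binary 1110, so bits 1 through 3 are set and core 0 is left out. The short standalone C sketch below decodes such a mask; it illustrates the arithmetic only and is not SPDK's or DPDK's actual mask parsing.

    /* coremask_demo.c - decode a hex core mask such as the -m 0xE seen in the log.
     * Illustration only; SPDK/DPDK have their own mask and core-list parsing. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long mask = 0xE;  /* value taken from the nvmf_tgt command line above */
        int count = 0;

        printf("core mask 0x%llX selects cores:", mask);
        for (int core = 0; core < 64; core++) {
            if (mask & (1ULL << core)) {
                printf(" %d", core);
                count++;
            }
        }
        printf("\ntotal cores: %d\n", count);  /* prints cores 1 2 3, total 3, matching the log */
        return 0;
    }
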
00:30:16.631 [2024-07-25 07:36:23.778732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.631 [2024-07-25 07:36:23.779541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.631 [2024-07-25 07:36:23.779581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.631 [2024-07-25 07:36:23.779591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.631 [2024-07-25 07:36:23.779830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.631 [2024-07-25 07:36:23.780056] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.631 [2024-07-25 07:36:23.780066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.631 [2024-07-25 07:36:23.780074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.631 [2024-07-25 07:36:23.783600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.631 [2024-07-25 07:36:23.792484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.631 [2024-07-25 07:36:23.793171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.631 [2024-07-25 07:36:23.793191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.631 [2024-07-25 07:36:23.793199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.631 [2024-07-25 07:36:23.793422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.631 [2024-07-25 07:36:23.793640] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.631 [2024-07-25 07:36:23.793649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.631 [2024-07-25 07:36:23.793656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.631 [2024-07-25 07:36:23.797165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.631 [2024-07-25 07:36:23.806249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.631 [2024-07-25 07:36:23.806962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.631 [2024-07-25 07:36:23.806979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.631 [2024-07-25 07:36:23.806987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.631 [2024-07-25 07:36:23.807208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.631 [2024-07-25 07:36:23.807426] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.631 [2024-07-25 07:36:23.807436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.631 [2024-07-25 07:36:23.807443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.631 [2024-07-25 07:36:23.810939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.631 [2024-07-25 07:36:23.820019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.631 [2024-07-25 07:36:23.820452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.631 [2024-07-25 07:36:23.820469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.631 [2024-07-25 07:36:23.820476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.631 [2024-07-25 07:36:23.820693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.631 [2024-07-25 07:36:23.820910] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.631 [2024-07-25 07:36:23.820920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.631 [2024-07-25 07:36:23.820927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.631 [2024-07-25 07:36:23.824575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.631 [2024-07-25 07:36:23.833875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.631 [2024-07-25 07:36:23.834629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.631 [2024-07-25 07:36:23.834667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.631 [2024-07-25 07:36:23.834678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.631 [2024-07-25 07:36:23.834915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.631 [2024-07-25 07:36:23.835136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.631 [2024-07-25 07:36:23.835146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.631 [2024-07-25 07:36:23.835154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.631 [2024-07-25 07:36:23.838666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.631 [2024-07-25 07:36:23.847751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.631 [2024-07-25 07:36:23.848534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.631 [2024-07-25 07:36:23.848572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.631 [2024-07-25 07:36:23.848582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.631 [2024-07-25 07:36:23.848819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.631 [2024-07-25 07:36:23.849040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.631 [2024-07-25 07:36:23.849050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.631 [2024-07-25 07:36:23.849058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.631 [2024-07-25 07:36:23.852569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.631 [2024-07-25 07:36:23.861658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.631 [2024-07-25 07:36:23.862449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.631 [2024-07-25 07:36:23.862488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.631 [2024-07-25 07:36:23.862498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.631 [2024-07-25 07:36:23.862736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.631 [2024-07-25 07:36:23.862957] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.631 [2024-07-25 07:36:23.862967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.631 [2024-07-25 07:36:23.862974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.631 [2024-07-25 07:36:23.866484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.631 [2024-07-25 07:36:23.875570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.631 [2024-07-25 07:36:23.876309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.631 [2024-07-25 07:36:23.876348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.632 [2024-07-25 07:36:23.876364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.632 [2024-07-25 07:36:23.876605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.632 [2024-07-25 07:36:23.876827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.632 [2024-07-25 07:36:23.876837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.632 [2024-07-25 07:36:23.876845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.632 [2024-07-25 07:36:23.880356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.632 [2024-07-25 07:36:23.889452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.632 [2024-07-25 07:36:23.890172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.632 [2024-07-25 07:36:23.890191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.632 [2024-07-25 07:36:23.890199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.632 [2024-07-25 07:36:23.890422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.632 [2024-07-25 07:36:23.890639] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.632 [2024-07-25 07:36:23.890648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.632 [2024-07-25 07:36:23.890655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.632 [2024-07-25 07:36:23.894151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.632 [2024-07-25 07:36:23.903288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.632 [2024-07-25 07:36:23.904099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.632 [2024-07-25 07:36:23.904137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.632 [2024-07-25 07:36:23.904147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.632 [2024-07-25 07:36:23.904392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.632 [2024-07-25 07:36:23.904614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.632 [2024-07-25 07:36:23.904624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.632 [2024-07-25 07:36:23.904632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.632 [2024-07-25 07:36:23.908135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.632 [2024-07-25 07:36:23.917228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.632 [2024-07-25 07:36:23.918044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.632 [2024-07-25 07:36:23.918082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.632 [2024-07-25 07:36:23.918093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.632 [2024-07-25 07:36:23.918337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.632 [2024-07-25 07:36:23.918558] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.632 [2024-07-25 07:36:23.918576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.632 [2024-07-25 07:36:23.918583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.632 [2024-07-25 07:36:23.922088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.632 [2024-07-25 07:36:23.931171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.632 [2024-07-25 07:36:23.931890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.632 [2024-07-25 07:36:23.931910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.632 [2024-07-25 07:36:23.931918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.632 [2024-07-25 07:36:23.932135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.632 [2024-07-25 07:36:23.932358] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.632 [2024-07-25 07:36:23.932369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.632 [2024-07-25 07:36:23.932376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.632 [2024-07-25 07:36:23.935875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.632 [2024-07-25 07:36:23.944954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.632 [2024-07-25 07:36:23.945765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.632 [2024-07-25 07:36:23.945804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.632 [2024-07-25 07:36:23.945814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.632 [2024-07-25 07:36:23.946051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.632 [2024-07-25 07:36:23.946279] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.632 [2024-07-25 07:36:23.946289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.632 [2024-07-25 07:36:23.946297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.632 [2024-07-25 07:36:23.949800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.632 [2024-07-25 07:36:23.958885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.632 [2024-07-25 07:36:23.959536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.632 [2024-07-25 07:36:23.959574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.632 [2024-07-25 07:36:23.959585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.632 [2024-07-25 07:36:23.959822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.632 [2024-07-25 07:36:23.960044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.632 [2024-07-25 07:36:23.960053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.632 [2024-07-25 07:36:23.960061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.632 [2024-07-25 07:36:23.963576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.632 [2024-07-25 07:36:23.972659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.632 [2024-07-25 07:36:23.973505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.632 [2024-07-25 07:36:23.973543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.632 [2024-07-25 07:36:23.973553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.632 [2024-07-25 07:36:23.973791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.632 [2024-07-25 07:36:23.974012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.632 [2024-07-25 07:36:23.974022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.632 [2024-07-25 07:36:23.974029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.632 [2024-07-25 07:36:23.977542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.632 [2024-07-25 07:36:23.986430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.632 [2024-07-25 07:36:23.987148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.632 [2024-07-25 07:36:23.987167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.632 [2024-07-25 07:36:23.987175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.632 [2024-07-25 07:36:23.987398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.632 [2024-07-25 07:36:23.987616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.632 [2024-07-25 07:36:23.987625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.632 [2024-07-25 07:36:23.987632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.632 [2024-07-25 07:36:23.991127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.895 [2024-07-25 07:36:24.000218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.895 [2024-07-25 07:36:24.000886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-25 07:36:24.000903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.895 [2024-07-25 07:36:24.000911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.895 [2024-07-25 07:36:24.001128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.895 [2024-07-25 07:36:24.001350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.895 [2024-07-25 07:36:24.001360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.895 [2024-07-25 07:36:24.001367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.895 [2024-07-25 07:36:24.004865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.895 [2024-07-25 07:36:24.014146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.895 [2024-07-25 07:36:24.014950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-25 07:36:24.014989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.895 [2024-07-25 07:36:24.015000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.895 [2024-07-25 07:36:24.015249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.895 [2024-07-25 07:36:24.015471] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.895 [2024-07-25 07:36:24.015480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.895 [2024-07-25 07:36:24.015488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.895 [2024-07-25 07:36:24.018990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.895 [2024-07-25 07:36:24.028073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.895 [2024-07-25 07:36:24.028789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-25 07:36:24.028808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.895 [2024-07-25 07:36:24.028816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.895 [2024-07-25 07:36:24.029033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.895 [2024-07-25 07:36:24.029256] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.895 [2024-07-25 07:36:24.029266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.895 [2024-07-25 07:36:24.029273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.895 [2024-07-25 07:36:24.032772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.895 [2024-07-25 07:36:24.041852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.895 [2024-07-25 07:36:24.042598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-25 07:36:24.042636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.895 [2024-07-25 07:36:24.042647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.895 [2024-07-25 07:36:24.042884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.895 [2024-07-25 07:36:24.043105] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.895 [2024-07-25 07:36:24.043115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.895 [2024-07-25 07:36:24.043123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.895 [2024-07-25 07:36:24.046630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.895 [2024-07-25 07:36:24.055713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.895 [2024-07-25 07:36:24.056409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.895 [2024-07-25 07:36:24.056436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.895 [2024-07-25 07:36:24.056444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.895 [2024-07-25 07:36:24.056668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.895 [2024-07-25 07:36:24.056886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.895 [2024-07-25 07:36:24.056897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.896 [2024-07-25 07:36:24.056908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.896 [2024-07-25 07:36:24.060473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.896 [2024-07-25 07:36:24.069565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.896 [2024-07-25 07:36:24.070184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-25 07:36:24.070229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.896 [2024-07-25 07:36:24.070240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.896 [2024-07-25 07:36:24.070479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.896 [2024-07-25 07:36:24.070700] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.896 [2024-07-25 07:36:24.070710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.896 [2024-07-25 07:36:24.070717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.896 [2024-07-25 07:36:24.074224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.896 [2024-07-25 07:36:24.083315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.896 [2024-07-25 07:36:24.084107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-25 07:36:24.084145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.896 [2024-07-25 07:36:24.084156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.896 [2024-07-25 07:36:24.084404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.896 [2024-07-25 07:36:24.084626] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.896 [2024-07-25 07:36:24.084635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.896 [2024-07-25 07:36:24.084643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.896 [2024-07-25 07:36:24.088147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.896 [2024-07-25 07:36:24.097244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.896 [2024-07-25 07:36:24.098051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-25 07:36:24.098089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.896 [2024-07-25 07:36:24.098099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.896 [2024-07-25 07:36:24.098344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.896 [2024-07-25 07:36:24.098566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.896 [2024-07-25 07:36:24.098576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.896 [2024-07-25 07:36:24.098584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.896 [2024-07-25 07:36:24.102091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.896 [2024-07-25 07:36:24.111001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.896 [2024-07-25 07:36:24.111782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-25 07:36:24.111825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.896 [2024-07-25 07:36:24.111835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.896 [2024-07-25 07:36:24.112073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.896 [2024-07-25 07:36:24.112301] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.896 [2024-07-25 07:36:24.112312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.896 [2024-07-25 07:36:24.112319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.896 [2024-07-25 07:36:24.115824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.896 [2024-07-25 07:36:24.124911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.896 [2024-07-25 07:36:24.125686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-25 07:36:24.125724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.896 [2024-07-25 07:36:24.125735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.896 [2024-07-25 07:36:24.125971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.896 [2024-07-25 07:36:24.126193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.896 [2024-07-25 07:36:24.126209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.896 [2024-07-25 07:36:24.126218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.896 [2024-07-25 07:36:24.129723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.896 [2024-07-25 07:36:24.138806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.896 [2024-07-25 07:36:24.139408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-25 07:36:24.139447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.896 [2024-07-25 07:36:24.139457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.896 [2024-07-25 07:36:24.139694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.896 [2024-07-25 07:36:24.139914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.896 [2024-07-25 07:36:24.139924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.896 [2024-07-25 07:36:24.139931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.896 [2024-07-25 07:36:24.143442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.896 [2024-07-25 07:36:24.152730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.896 [2024-07-25 07:36:24.153496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-25 07:36:24.153534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.896 [2024-07-25 07:36:24.153545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.896 [2024-07-25 07:36:24.153782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.896 [2024-07-25 07:36:24.154007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.896 [2024-07-25 07:36:24.154017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.896 [2024-07-25 07:36:24.154025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.896 [2024-07-25 07:36:24.157536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
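The repeating disconnect/reconnect/fail cycle above is produced by explicit controller resets issued against a target port that is not listening yet. A hedged sketch of triggering one such reset by hand with the standard rpc.py helper; the controller name (NVMe0) and the RPC socket path are assumptions for illustration, not values taken from this excerpt:

  # Ask the bdev_nvme layer to disconnect and reconnect an attached controller.
  # Socket path and controller name are illustrative.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_reset_controller NVMe0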
00:30:16.896 [2024-07-25 07:36:24.166620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.896 [2024-07-25 07:36:24.167357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-25 07:36:24.167395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.896 [2024-07-25 07:36:24.167406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.896 [2024-07-25 07:36:24.167643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.896 [2024-07-25 07:36:24.167864] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.896 [2024-07-25 07:36:24.167873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.896 [2024-07-25 07:36:24.167881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.896 [2024-07-25 07:36:24.171395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.896 [2024-07-25 07:36:24.180478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.896 [2024-07-25 07:36:24.181279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-25 07:36:24.181317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.896 [2024-07-25 07:36:24.181329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.896 [2024-07-25 07:36:24.181570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.896 [2024-07-25 07:36:24.181791] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.896 [2024-07-25 07:36:24.181802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.896 [2024-07-25 07:36:24.181809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.896 [2024-07-25 07:36:24.185335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.896 [2024-07-25 07:36:24.194422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.896 [2024-07-25 07:36:24.195246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.896 [2024-07-25 07:36:24.195284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.896 [2024-07-25 07:36:24.195296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.897 [2024-07-25 07:36:24.195537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.897 [2024-07-25 07:36:24.195757] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.897 [2024-07-25 07:36:24.195767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.897 [2024-07-25 07:36:24.195775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.897 [2024-07-25 07:36:24.199300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.897 [2024-07-25 07:36:24.208186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.897 [2024-07-25 07:36:24.208958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-25 07:36:24.208996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.897 [2024-07-25 07:36:24.209006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.897 [2024-07-25 07:36:24.209251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.897 [2024-07-25 07:36:24.209473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.897 [2024-07-25 07:36:24.209482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.897 [2024-07-25 07:36:24.209490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.897 [2024-07-25 07:36:24.212996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.897 [2024-07-25 07:36:24.222087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.897 [2024-07-25 07:36:24.222910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-25 07:36:24.222948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.897 [2024-07-25 07:36:24.222959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.897 [2024-07-25 07:36:24.223196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.897 [2024-07-25 07:36:24.223425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.897 [2024-07-25 07:36:24.223435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.897 [2024-07-25 07:36:24.223443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.897 [2024-07-25 07:36:24.226948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:16.897 [2024-07-25 07:36:24.235833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.897 [2024-07-25 07:36:24.236635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-25 07:36:24.236673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.897 [2024-07-25 07:36:24.236684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.897 [2024-07-25 07:36:24.236921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.897 [2024-07-25 07:36:24.237142] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.897 [2024-07-25 07:36:24.237151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.897 [2024-07-25 07:36:24.237159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.897 [2024-07-25 07:36:24.240672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:16.897 [2024-07-25 07:36:24.249759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:16.897 [2024-07-25 07:36:24.250453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.897 [2024-07-25 07:36:24.250492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:16.897 [2024-07-25 07:36:24.250508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:16.897 [2024-07-25 07:36:24.250745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:16.897 [2024-07-25 07:36:24.250966] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:16.897 [2024-07-25 07:36:24.250975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:16.897 [2024-07-25 07:36:24.250983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:16.897 [2024-07-25 07:36:24.254497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:17.159 [2024-07-25 07:36:24.263587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.159 [2024-07-25 07:36:24.264375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-07-25 07:36:24.264413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.159 [2024-07-25 07:36:24.264424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.159 [2024-07-25 07:36:24.264662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.159 [2024-07-25 07:36:24.264883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.159 [2024-07-25 07:36:24.264893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.159 [2024-07-25 07:36:24.264901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.159 [2024-07-25 07:36:24.268415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:17.159 [2024-07-25 07:36:24.277507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.159 [2024-07-25 07:36:24.278244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.159 [2024-07-25 07:36:24.278270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.159 [2024-07-25 07:36:24.278278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.159 [2024-07-25 07:36:24.278501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.159 [2024-07-25 07:36:24.278720] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.160 [2024-07-25 07:36:24.278729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.160 [2024-07-25 07:36:24.278737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.160 [2024-07-25 07:36:24.282248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:17.160 [2024-07-25 07:36:24.291347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.160 [2024-07-25 07:36:24.292055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-07-25 07:36:24.292073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.160 [2024-07-25 07:36:24.292080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.160 [2024-07-25 07:36:24.292303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.160 [2024-07-25 07:36:24.292520] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.160 [2024-07-25 07:36:24.292535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.160 [2024-07-25 07:36:24.292542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.160 [2024-07-25 07:36:24.296044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:17.160 [2024-07-25 07:36:24.305138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.160 [2024-07-25 07:36:24.305927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-07-25 07:36:24.305966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.160 [2024-07-25 07:36:24.305977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.160 [2024-07-25 07:36:24.306222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.160 [2024-07-25 07:36:24.306444] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.160 [2024-07-25 07:36:24.306454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.160 [2024-07-25 07:36:24.306462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.160 [2024-07-25 07:36:24.309967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:17.160 [2024-07-25 07:36:24.318884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.160 [2024-07-25 07:36:24.319672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-07-25 07:36:24.319711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.160 [2024-07-25 07:36:24.319722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.160 [2024-07-25 07:36:24.319960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.160 [2024-07-25 07:36:24.320181] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.160 [2024-07-25 07:36:24.320193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.160 [2024-07-25 07:36:24.320210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.160 [2024-07-25 07:36:24.323717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:17.160 [2024-07-25 07:36:24.332807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.160 [2024-07-25 07:36:24.333285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-07-25 07:36:24.333323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.160 [2024-07-25 07:36:24.333335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.160 [2024-07-25 07:36:24.333574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.160 [2024-07-25 07:36:24.333796] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.160 [2024-07-25 07:36:24.333805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.160 [2024-07-25 07:36:24.333813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.160 [2024-07-25 07:36:24.337327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:17.160 [2024-07-25 07:36:24.346630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.160 [2024-07-25 07:36:24.347462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-07-25 07:36:24.347499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.160 [2024-07-25 07:36:24.347510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.160 [2024-07-25 07:36:24.347747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.160 [2024-07-25 07:36:24.347968] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.160 [2024-07-25 07:36:24.347978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.160 [2024-07-25 07:36:24.347985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.160 [2024-07-25 07:36:24.351499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:17.160 [2024-07-25 07:36:24.360386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.160 [2024-07-25 07:36:24.361191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-07-25 07:36:24.361236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.160 [2024-07-25 07:36:24.361247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.160 [2024-07-25 07:36:24.361484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.160 [2024-07-25 07:36:24.361706] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.160 [2024-07-25 07:36:24.361716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.160 [2024-07-25 07:36:24.361723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.160 [2024-07-25 07:36:24.365234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:17.160 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:17.160 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:30:17.160 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:17.160 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:17.160 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.160 [2024-07-25 07:36:24.374326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.160 [2024-07-25 07:36:24.375103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-07-25 07:36:24.375142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.160 [2024-07-25 07:36:24.375155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.160 [2024-07-25 07:36:24.375407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.160 [2024-07-25 07:36:24.375630] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.160 [2024-07-25 07:36:24.375642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.160 [2024-07-25 07:36:24.375651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.160 [2024-07-25 07:36:24.379157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
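The `(( i == 0 ))` / `return 0` xtrace above is the tail of a wait-for-target loop: the script polls until the freshly started nvmf target answers RPCs, or a retry budget runs out. A rough, hypothetical sketch of that kind of readiness loop; it is not the actual autotest_common.sh helper, and the socket path is an assumption:

  # Poll the target's RPC socket until it responds, up to ~30 attempts.
  i=30
  while ((i > 0)); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1 && break
      sleep 1
      ((i--))
  done
  if ((i == 0)); then
      echo "nvmf target never came up" >&2
      exit 1
  fi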
00:30:17.160 [2024-07-25 07:36:24.388262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.160 [2024-07-25 07:36:24.388802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-07-25 07:36:24.388840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.160 [2024-07-25 07:36:24.388850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.160 [2024-07-25 07:36:24.389087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.160 [2024-07-25 07:36:24.389316] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.160 [2024-07-25 07:36:24.389327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.160 [2024-07-25 07:36:24.389335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.160 [2024-07-25 07:36:24.392841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:17.160 [2024-07-25 07:36:24.402150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.160 [2024-07-25 07:36:24.402800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.160 [2024-07-25 07:36:24.402838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.160 [2024-07-25 07:36:24.402849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.160 [2024-07-25 07:36:24.403087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.160 [2024-07-25 07:36:24.403319] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.160 [2024-07-25 07:36:24.403332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.160 [2024-07-25 07:36:24.403339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.160 [2024-07-25 07:36:24.406849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
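Every retry in the block above fails the same way: bdev_nvme disconnects the controller, the TCP connect() to 10.0.0.2 port 4420 is refused with errno = 111, and the reinitialization attempt is abandoned until the next retry. On Linux, errno 111 is ECONNREFUSED, i.e. nothing is accepting on that port yet. A hypothetical one-liner (not part of the test scripts) to decode the errno seen in these messages:

    # decode errno 111 into its symbolic name and message (Linux)
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused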
00:30:17.160 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.160 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.160 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.160 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.161 [2024-07-25 07:36:24.415939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.161 [2024-07-25 07:36:24.416737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.161 [2024-07-25 07:36:24.416776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.161 [2024-07-25 07:36:24.416786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.161 [2024-07-25 07:36:24.417024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.161 [2024-07-25 07:36:24.417253] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.161 [2024-07-25 07:36:24.417263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.161 [2024-07-25 07:36:24.417271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.161 [2024-07-25 07:36:24.417657] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.161 [2024-07-25 07:36:24.420776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:17.161 [2024-07-25 07:36:24.429868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.161 [2024-07-25 07:36:24.430532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.161 [2024-07-25 07:36:24.430570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.161 [2024-07-25 07:36:24.430580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.161 [2024-07-25 07:36:24.430818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.161 [2024-07-25 07:36:24.431039] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.161 [2024-07-25 07:36:24.431049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.161 [2024-07-25 07:36:24.431056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.161 [2024-07-25 07:36:24.434566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.161 [2024-07-25 07:36:24.443652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.161 [2024-07-25 07:36:24.444471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.161 [2024-07-25 07:36:24.444509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.161 [2024-07-25 07:36:24.444519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.161 [2024-07-25 07:36:24.444756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.161 [2024-07-25 07:36:24.444977] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.161 [2024-07-25 07:36:24.444987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.161 [2024-07-25 07:36:24.444994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.161 [2024-07-25 07:36:24.448508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:17.161 Malloc0 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.161 [2024-07-25 07:36:24.457595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.161 [2024-07-25 07:36:24.458493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.161 [2024-07-25 07:36:24.458531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.161 [2024-07-25 07:36:24.458542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.161 [2024-07-25 07:36:24.458779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.161 [2024-07-25 07:36:24.459000] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.161 [2024-07-25 07:36:24.459014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.161 [2024-07-25 07:36:24.459022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:17.161 [2024-07-25 07:36:24.462533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.161 [2024-07-25 07:36:24.471417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.161 [2024-07-25 07:36:24.472194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.161 [2024-07-25 07:36:24.472240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.161 [2024-07-25 07:36:24.472253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.161 [2024-07-25 07:36:24.472491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.161 [2024-07-25 07:36:24.472712] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.161 [2024-07-25 07:36:24.472722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.161 [2024-07-25 07:36:24.472729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.161 [2024-07-25 07:36:24.476236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:17.161 [2024-07-25 07:36:24.485340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.161 [2024-07-25 07:36:24.486151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.161 [2024-07-25 07:36:24.486189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ee3d0 with addr=10.0.0.2, port=4420 00:30:17.161 [2024-07-25 07:36:24.486208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee3d0 is same with the state(5) to be set 00:30:17.161 [2024-07-25 07:36:24.486448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ee3d0 (9): Bad file descriptor 00:30:17.161 [2024-07-25 07:36:24.486669] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:17.161 [2024-07-25 07:36:24.486679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:17.161 [2024-07-25 07:36:24.486686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:17.161 [2024-07-25 07:36:24.487554] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.161 [2024-07-25 07:36:24.490193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.161 07:36:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 278762 00:30:17.161 [2024-07-25 07:36:24.499290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:17.422 [2024-07-25 07:36:24.532861] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
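For context on how the target behind these reconnect attempts is assembled: host/bdevperf.sh drives it through the rpc_cmd helper, issuing nvmf_create_transport -t tcp -o -u 8192, bdev_malloc_create 64 512 -b Malloc0, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, nvmf_subsystem_add_ns, and nvmf_subsystem_add_listener on 10.0.0.2:4420, as the trace lines above show. A minimal sketch of the same sequence sent directly through SPDK's scripts/rpc.py (assuming the default RPC socket; the rpc_cmd wrapper in the harness may multiplex these over one connection):

    # sketch only: the RPC calls visible in the trace, issued via scripts/rpc.py
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up ("NVMe/TCP Target Listening on 10.0.0.2 port 4420"), the reconnect loop above finally succeeds ("Resetting controller successful") and bdevperf reports its results below.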
00:30:27.421 
00:30:27.421                                        Latency(us)
00:30:27.421 Device Information  : runtime(s)     IOPS    MiB/s   Fail/s    TO/s   Average       min       max
00:30:27.421 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:27.421 Verification LBA range: start 0x0 length 0x4000
00:30:27.421 Nvme1n1             :      15.04  8481.78    33.13  9736.49    0.00   6987.67   1310.72  45875.20
00:30:27.421 ===================================================================================================================
00:30:27.421 Total               :             8481.78    33.13  9736.49    0.00   6987.67   1310.72  45875.20
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:27.421 rmmod nvme_tcp
00:30:27.421 rmmod nvme_fabrics
00:30:27.421 rmmod nvme_keyring
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 280025 ']'
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 280025
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 280025 ']'
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 280025
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 280025
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 280025'
00:30:27.421 killing process with pid 280025
00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 280025
00:30:27.421 07:36:33 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 280025 00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.421 07:36:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:28.365 00:30:28.365 real 0m27.833s 00:30:28.365 user 1m3.187s 00:30:28.365 sys 0m7.094s 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:28.365 ************************************ 00:30:28.365 END TEST nvmf_bdevperf 00:30:28.365 ************************************ 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.365 ************************************ 00:30:28.365 START TEST nvmf_target_disconnect 00:30:28.365 ************************************ 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:28.365 * Looking for test storage... 
00:30:28.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.365 
07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:28.365 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:30:28.366 07:36:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.581 
07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:36.581 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:36.581 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.581 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:36.582 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:36.582 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:36.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:36.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:30:36.582 00:30:36.582 --- 10.0.0.2 ping statistics --- 00:30:36.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.582 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:36.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:30:36.582 00:30:36.582 --- 10.0.0.1 ping statistics --- 00:30:36.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.582 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:36.582 ************************************ 00:30:36.582 START TEST nvmf_target_disconnect_tc1 00:30:36.582 ************************************ 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:36.582 07:36:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:36.582 07:36:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:36.582 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.582 [2024-07-25 07:36:43.059530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-25 07:36:43.059591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2013e20 with addr=10.0.0.2, port=4420 00:30:36.582 [2024-07-25 07:36:43.059624] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:36.582 [2024-07-25 07:36:43.059640] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:36.582 [2024-07-25 07:36:43.059648] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:36.582 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:36.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:36.582 Initializing NVMe Controllers 00:30:36.582 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:30:36.582 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:36.582 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:36.582 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:36.582 00:30:36.582 real 0m0.119s 00:30:36.582 user 0m0.051s 00:30:36.582 sys 0m0.069s 00:30:36.582 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:36.582 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:36.582 ************************************ 00:30:36.582 END TEST nvmf_target_disconnect_tc1 00:30:36.582 ************************************ 00:30:36.582 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:36.583 07:36:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:36.583 ************************************ 00:30:36.583 START TEST nvmf_target_disconnect_tc2 00:30:36.583 ************************************ 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=286137 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 286137 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 286137 ']' 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:36.583 07:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.583 [2024-07-25 07:36:43.220160] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:30:36.583 [2024-07-25 07:36:43.220223] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.583 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.583 [2024-07-25 07:36:43.308176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:36.583 [2024-07-25 07:36:43.401611] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
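The tc2 target is launched with nvmfappstart -m 0xF0 (nvmf_tgt -i 0 -e 0xFFFF -m 0xF0), so its reactors come up on cores 4-7: 0xF0 is binary 11110000, which matches the "Reactor started on core 4/5/6/7" notices that follow. A hypothetical one-liner (not part of the test scripts) to expand such a core mask:

    # expand a reactor core mask into core indices
    python3 -c 'm = 0xF0; print([i for i in range(64) if m >> i & 1])'
    # [4, 5, 6, 7]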
00:30:36.583 [2024-07-25 07:36:43.401673] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.583 [2024-07-25 07:36:43.401682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.583 [2024-07-25 07:36:43.401689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.583 [2024-07-25 07:36:43.401695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:36.583 [2024-07-25 07:36:43.401856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:30:36.583 [2024-07-25 07:36:43.402015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:30:36.583 [2024-07-25 07:36:43.402176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:36.583 [2024-07-25 07:36:43.402176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.845 Malloc0 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.845 [2024-07-25 07:36:44.097751] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:36.845 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.846 [2024-07-25 07:36:44.138143] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=286174 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:36.846 07:36:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:37.108 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.031 07:36:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 286137 00:30:39.031 07:36:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O 
failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 [2024-07-25 07:36:46.171690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 
00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Write completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 Read completed with error (sct=0, sc=8) 00:30:39.031 starting I/O failed 00:30:39.031 [2024-07-25 07:36:46.171936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.031 [2024-07-25 07:36:46.172268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.031 [2024-07-25 07:36:46.172286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.031 qpair failed and we were unable to recover it. 00:30:39.031 [2024-07-25 07:36:46.172743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.031 [2024-07-25 07:36:46.172756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.031 qpair failed and we were unable to recover it. 00:30:39.031 [2024-07-25 07:36:46.173087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.031 [2024-07-25 07:36:46.173098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.031 qpair failed and we were unable to recover it. 00:30:39.031 [2024-07-25 07:36:46.173605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.031 [2024-07-25 07:36:46.173619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.031 qpair failed and we were unable to recover it. 
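The burst above is the expected failure signature for the tc2 disconnect case: the Malloc0 namespace and the TCP listeners on 10.0.0.2:4420 are added to nqn.2016-06.io.spdk:cnode1, the reconnect example is started with queue depth 32, and the target process (pid 286137 in this run) is then killed while I/O is in flight. Each I/O qpair completes its 32 outstanding commands with sct=0, sc=8 (generic status 0x08, command aborted due to SQ deletion, which is how SPDK usually fails outstanding commands when a qpair is torn down) and then logs "CQ transport error -6 (No such device or address)". A condensed sketch of that sequence, assuming an SPDK target is already running with the Malloc0 bdev and the cnode1 subsystem created (rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, and TARGET_PID here stands in for the nvmf target pid):
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Drive the subsystem with the reconnect example (queue depth 32, 4 KiB random R/W,
  # 50% reads, 10 s run, core mask 0xF), then kill the target while I/O is outstanding:
  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$TARGET_PID"   # 286137 in this run
  sleep 2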
00:30:39.031 [2024-07-25 07:36:46.173930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.031 [2024-07-25 07:36:46.173943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.031 qpair failed and we were unable to recover it. 00:30:39.031 [2024-07-25 07:36:46.174538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.031 [2024-07-25 07:36:46.174577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.031 qpair failed and we were unable to recover it. 00:30:39.031 [2024-07-25 07:36:46.175057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.031 [2024-07-25 07:36:46.175071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.031 qpair failed and we were unable to recover it. 00:30:39.031 [2024-07-25 07:36:46.175490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.031 [2024-07-25 07:36:46.175529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.031 qpair failed and we were unable to recover it. 00:30:39.031 [2024-07-25 07:36:46.175990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.031 [2024-07-25 07:36:46.176004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.031 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.176472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.176511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.176860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.176874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.177418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.177458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.177938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.177951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.178444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.178483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 
00:30:39.032 [2024-07-25 07:36:46.178954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.178968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.179526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.179564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.179985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.180000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.180540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.180583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.181058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.181072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.181524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.181564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.182040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.182054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.182585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.182624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.183095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.183109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.183360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.183373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 
00:30:39.032 [2024-07-25 07:36:46.183800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.183811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.184277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.184288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.184763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.184775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.185234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.185246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.185705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.185717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.186183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.186194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.186654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.186667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.187083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.187095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.187409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.187421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.187859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.187870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 
00:30:39.032 [2024-07-25 07:36:46.188332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.188343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.188792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.188803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.189145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.189156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.189506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.189518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.189932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.189944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.190374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.190385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.190803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.190814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.191163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.191174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.191528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.191540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.191879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.191889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 
00:30:39.032 [2024-07-25 07:36:46.192319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.032 [2024-07-25 07:36:46.192330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.032 qpair failed and we were unable to recover it. 00:30:39.032 [2024-07-25 07:36:46.192790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.192800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.193262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.193274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.194143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.194165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.194606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.194622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.195087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.195100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.195566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.195580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.195963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.195977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.196338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.196352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.196821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.196835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 
00:30:39.033 [2024-07-25 07:36:46.197277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.197291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.197551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.197568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.198046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.198059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.198443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.198460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.198688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.198704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.199169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.199183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.200080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.200105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.200568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.200583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.201049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.201063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.201527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.201540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 
00:30:39.033 [2024-07-25 07:36:46.202004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.202019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.202477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.202490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.203552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.203578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.204046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.204060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.204416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.204430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.204778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.204791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.205233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.205247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.205710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.205724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.206188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.206211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.206671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.206688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 
00:30:39.033 [2024-07-25 07:36:46.207119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.207137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.207682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.207739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.208186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.208217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.208671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.208688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.209128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.033 [2024-07-25 07:36:46.209146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.033 qpair failed and we were unable to recover it. 00:30:39.033 [2024-07-25 07:36:46.209616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.209638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.210034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.210051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.210567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.210625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.211113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.211135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.211666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.211723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 
00:30:39.034 [2024-07-25 07:36:46.212087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.212110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.212531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.212550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.212976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.212994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.213553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.213611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.214052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.214074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.214453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.214472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.214807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.214825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.215267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.215284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.215717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.215734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.216159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.216176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 
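Every repeated triple in this stretch is one reconnect attempt walking down the host stack: posix.c:1023 (posix_sock_create) reports the raw connect() errno, nvme_tcp.c:2383 (nvme_tcp_qpair_connect_sock) turns it into a qpair-level socket connection error, and the example then notes that the qpair could not be recovered and retries. Because the target was killed, every attempt against 10.0.0.2:4420 fails the same way until a listener exists there again. One way to gauge the size of the retry storm from a saved copy of this console output (the filename below is only an example):
  grep -c 'qpair failed and we were unable to recover it' nvmf-tcp-phy-autotest.console.log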
00:30:39.034 [2024-07-25 07:36:46.217257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.217300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.217813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.217840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.218324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.218347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.218814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.218842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.219177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.219214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.219679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.219701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.220140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.220162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.220491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.220516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.220986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.221008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.221472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.221494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 
00:30:39.034 [2024-07-25 07:36:46.221924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.221945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.222399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.222420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.222740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.222763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.223228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.223250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.223722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.223743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.224192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.224219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.225326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.225364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.225843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.225866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.226322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.226345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.034 qpair failed and we were unable to recover it. 00:30:39.034 [2024-07-25 07:36:46.226831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.034 [2024-07-25 07:36:46.226852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 
00:30:39.035 [2024-07-25 07:36:46.227304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.227326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.227759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.227781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.228224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.228247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.228724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.228753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.229229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.229260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.229624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.229654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.230096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.230125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.230617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.230647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.231089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.231117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.231572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.231602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 
00:30:39.035 [2024-07-25 07:36:46.231970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.231999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.232508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.232538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.233025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.233054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.233603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.233633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.234154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.234182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.234690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.234720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.235218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.235248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.235747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.235776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.236440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.236531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.237064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.237101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 
00:30:39.035 [2024-07-25 07:36:46.237573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.237606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.238094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.238123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.238531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.238562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.239044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.239085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.239556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.239587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.240061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.240090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.240567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.240596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.241078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.241108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.242775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.242830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.243336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.243369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 
00:30:39.035 [2024-07-25 07:36:46.243818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.243848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.244330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.244359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.244854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.035 [2024-07-25 07:36:46.244882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.035 qpair failed and we were unable to recover it. 00:30:39.035 [2024-07-25 07:36:46.245367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.245398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.245764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.245797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.247389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.247439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.247945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.247977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.248450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.248480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.248966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.248995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.249478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.249509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 
00:30:39.036 [2024-07-25 07:36:46.249974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.250003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.250558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.250649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.251236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.251278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.251781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.251812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.252304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.252334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.252806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.252834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.253309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.253341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.253828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.253857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.254341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.254370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.254850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.254880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 
00:30:39.036 [2024-07-25 07:36:46.255364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.255394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.255853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.255883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.256360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.256389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.258031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.258085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.258583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.258617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.259085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.259114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.259576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.259605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.260092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.260121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.260580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.260609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.261098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.261127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 
00:30:39.036 [2024-07-25 07:36:46.261613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.036 [2024-07-25 07:36:46.261646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.036 qpair failed and we were unable to recover it. 00:30:39.036 [2024-07-25 07:36:46.262097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.262126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.262581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.262612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.263060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.263097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.263466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.263503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.263983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.264013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.264471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.264501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.264872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.264908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.265262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.265292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.265789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.265818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 
00:30:39.037 [2024-07-25 07:36:46.266267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.266299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.266784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.266813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.267175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.267219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.267694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.267724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.268215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.268245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.268735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.268764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.269248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.269280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.269793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.269822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.270401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.270492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.271032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.271068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 
00:30:39.037 [2024-07-25 07:36:46.271557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.271589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.273172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.273238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.273660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.273695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.274178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.274225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.274702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.274731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.275183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.275222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.275699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.275728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.276217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.276248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.276724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.276752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.277251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.277281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 
00:30:39.037 [2024-07-25 07:36:46.279395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.279456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.279962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.279994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.281480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.281529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.282027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.282059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.282508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.282540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.283055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.283084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.037 [2024-07-25 07:36:46.283551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.037 [2024-07-25 07:36:46.283581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.037 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.284049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.284078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.284623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.284654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.285104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.285132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 
00:30:39.038 [2024-07-25 07:36:46.285629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.285660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.286144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.286172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.286554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.286589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.287039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.287076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.287553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.287587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.288073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.288102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.288587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.288618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.289103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.289133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.289617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.289648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.290013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.290048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 
00:30:39.038 [2024-07-25 07:36:46.290505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.290536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.291265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.291304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.291788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.291836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.292464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.292501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.293024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.293054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.293423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.293457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.293911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.293940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.294412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.294442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.294934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.294963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.295447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.295476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 
00:30:39.038 [2024-07-25 07:36:46.295851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.295880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.296362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.296391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.296880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.296909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.297407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.297436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.297912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.297941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.298439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.298469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.298954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.298983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.299353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.299386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.299852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.299881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.300347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.300377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 
00:30:39.038 [2024-07-25 07:36:46.300857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.300887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.301376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.301405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.301778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.301807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.038 [2024-07-25 07:36:46.302306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.038 [2024-07-25 07:36:46.302335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.038 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.302832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.302860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.303336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.303365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.303844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.303873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.304248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.304277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.304566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.304594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.304972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.305000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 
00:30:39.039 [2024-07-25 07:36:46.305520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.305549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.306038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.306068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.306626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.306656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.307097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.307132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.307626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.307657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.308134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.308162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.308676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.308706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.309189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.309226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.309583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.309618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.310070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.310099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 
00:30:39.039 [2024-07-25 07:36:46.310600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.310630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.311146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.311176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.311466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.311500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.311968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.311996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.312387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.312417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.312907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.312936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.313428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.313458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.313924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.313953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.314330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.314365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.314843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.314872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 
00:30:39.039 [2024-07-25 07:36:46.315258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.315289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.315671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.315704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.316082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.316111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.316605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.316635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.317128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.317157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.317631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.317661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.318141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.318170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.318670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.318700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.039 [2024-07-25 07:36:46.319086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.039 [2024-07-25 07:36:46.319116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.039 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.319606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.319637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 
00:30:39.040 [2024-07-25 07:36:46.320133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.320164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.320674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.320705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.321062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.321093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.321712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.321741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.322112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.322140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.322512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.322542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.323024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.323053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.323475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.323504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.323975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.324004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.324390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.324419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 
00:30:39.040 [2024-07-25 07:36:46.324865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.324894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.325381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.325411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.325907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.325936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.326328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.326363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.326716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.326744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.327246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.327276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.327764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.327793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.328277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.328307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.328799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.328828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.329318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.329347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 
00:30:39.040 [2024-07-25 07:36:46.329829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.329857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.330282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.330312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.330797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.330828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.331326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.331356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.331845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.331875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.332353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.332383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.332657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.332688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.333176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.333212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.333710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.333739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.334221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.334252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 
00:30:39.040 [2024-07-25 07:36:46.334756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.334785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.335274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.335304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.040 qpair failed and we were unable to recover it. 00:30:39.040 [2024-07-25 07:36:46.335818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.040 [2024-07-25 07:36:46.335847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.336297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.336326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.336808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.336836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.337198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.337236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.337710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.337740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.338243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.338272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.338722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.338751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.339232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.339261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 
00:30:39.041 [2024-07-25 07:36:46.339757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.339786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.340276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.340305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.340783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.340811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.341177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.341212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.341697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.341725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.342218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.342248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.342764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.342793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.343409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.343502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.344087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.344124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.344491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.344523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 
00:30:39.041 [2024-07-25 07:36:46.344998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.345027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.345584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.345678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.346451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.346543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.347091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.347138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.347628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.347660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.348146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.348175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.348660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.348690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.349181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.349215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.349631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.041 [2024-07-25 07:36:46.349660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.041 qpair failed and we were unable to recover it. 00:30:39.041 [2024-07-25 07:36:46.350217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.350247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 
00:30:39.042 [2024-07-25 07:36:46.350757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.350786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.351284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.351317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.351843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.351872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d8c000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.352471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.352505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.352936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.352946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.353493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.353527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.353883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.353893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.354482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.354515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.354972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.354982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.355538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.355573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 
00:30:39.042 [2024-07-25 07:36:46.355789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.355802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.356257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.356266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.356749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.356758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.357233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.357241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.357658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.357667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.358099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.358108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.358574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.358583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.359050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.359059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.359543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.359552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.359901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.359909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 
00:30:39.042 [2024-07-25 07:36:46.360472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.360506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.360966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.360977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.361548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.361582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.362115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.362126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.362583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.362592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.363066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.363075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.363604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.363638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.364117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.364128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.364612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.364647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.365113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.365124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 
00:30:39.042 [2024-07-25 07:36:46.365586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.365596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.366075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.366084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.366524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.366533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.366958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.042 [2024-07-25 07:36:46.366973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.042 qpair failed and we were unable to recover it. 00:30:39.042 [2024-07-25 07:36:46.367534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.367568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.368022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.368032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.368650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.368684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.369158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.369168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.369711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.369745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.370194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.370209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 
00:30:39.043 [2024-07-25 07:36:46.370743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.370777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.371405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.371439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.371887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.371897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.372496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.372530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.372985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.372995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.373528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.373562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.374032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.374042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.374599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.374634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.375092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.375103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.375549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.375558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 
00:30:39.043 [2024-07-25 07:36:46.376040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.376050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.376633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.376667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.377170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.377181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.377276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.377294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.377710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.377720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.378198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.378210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.378643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.378652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.379097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.379106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.379566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.379575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.380086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.380096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 
00:30:39.043 [2024-07-25 07:36:46.380547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.380557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.381004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.381013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.381474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.381509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.382004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.382014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.382548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.043 [2024-07-25 07:36:46.382582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.043 qpair failed and we were unable to recover it. 00:30:39.043 [2024-07-25 07:36:46.382927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.382938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.383511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.383545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.384016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.384026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.384643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.384678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.385027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.385038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 
00:30:39.044 [2024-07-25 07:36:46.385579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.385613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.386058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.386068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.386657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.386692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.387149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.387164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.387704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.387738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.388194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.388212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.388647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.388682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.389150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.389161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.389521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.389557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.389879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.389888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 
00:30:39.044 [2024-07-25 07:36:46.390360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.390369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.390758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.390767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.391234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.391244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.391589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.391598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.044 [2024-07-25 07:36:46.392047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.044 [2024-07-25 07:36:46.392056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.044 qpair failed and we were unable to recover it. 00:30:39.314 [2024-07-25 07:36:46.392404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.314 [2024-07-25 07:36:46.392416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.314 qpair failed and we were unable to recover it. 00:30:39.314 [2024-07-25 07:36:46.392867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.314 [2024-07-25 07:36:46.392876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.314 qpair failed and we were unable to recover it. 00:30:39.314 [2024-07-25 07:36:46.393310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.314 [2024-07-25 07:36:46.393320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.314 qpair failed and we were unable to recover it. 00:30:39.314 [2024-07-25 07:36:46.393762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.314 [2024-07-25 07:36:46.393771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.314 qpair failed and we were unable to recover it. 00:30:39.314 [2024-07-25 07:36:46.394080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.314 [2024-07-25 07:36:46.394089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 
00:30:39.315 [2024-07-25 07:36:46.394542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.394551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.395003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.395012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.395442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.395451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.395898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.395907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.396479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.396514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.396986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.396997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.397498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.397533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.397906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.397917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.398394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.398404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.398877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.398886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 
00:30:39.315 [2024-07-25 07:36:46.399452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.399486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.399957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.399968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.400473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.400507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.400869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.400880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.401330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.401339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.401679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.401689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.402136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.402144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.402594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.402603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.403100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.403108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.403587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.403596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 
00:30:39.315 [2024-07-25 07:36:46.404065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.404074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.404731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.404766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.405158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.405169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.405682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.405716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.406191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.406205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.406768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.406803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.407417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.407452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.407906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.407916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.408474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.408509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.408961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.408971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 
00:30:39.315 [2024-07-25 07:36:46.409496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.409530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.409895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.409906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.410482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.410517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.411046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.411057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.411610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.411644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.412111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.412122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.315 [2024-07-25 07:36:46.412457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.315 [2024-07-25 07:36:46.412467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.315 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.412917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.412926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.413423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.413457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.413915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.413926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 
00:30:39.316 [2024-07-25 07:36:46.414148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.414161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.414482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.414492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.414943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.414953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.415501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.415535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.415988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.415998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.416175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.416188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.416663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.416672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.417121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.417131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.417587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.417596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.418070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.418078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 
00:30:39.316 [2024-07-25 07:36:46.418629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.418668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.419209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.419220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.419743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.419777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.420034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.420044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.420599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.420633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.421112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.421123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.421568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.421603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.422075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.422085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.422403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.422412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.422729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.422738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 
00:30:39.316 [2024-07-25 07:36:46.423211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.423221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.423680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.423688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.424000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.424010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.424475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.424483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.424830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.424838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.425146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.425155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.425606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.425615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.426061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.426069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.426648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.426682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.427127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.427137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 
00:30:39.316 [2024-07-25 07:36:46.427599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.427608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.428067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.428076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.428612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.428646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.429089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.429100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.316 [2024-07-25 07:36:46.429653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.316 [2024-07-25 07:36:46.429688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.316 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.429911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.429925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.430400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.430410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.430889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.430897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.431127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.431137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.431528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.431537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 
00:30:39.317 [2024-07-25 07:36:46.432038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.432047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.432618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.432653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.433109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.433120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.433643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.433652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.434081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.434091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.434532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.434542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.434982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.434991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.435468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.435503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.436000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.436010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.436494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.436528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 
00:30:39.317 [2024-07-25 07:36:46.436921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.436935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.437515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.437550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.438011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.438022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.438590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.438625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.439051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.439061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.439526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.439561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.440033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.440045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.440481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.440516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.440797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.440808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.441396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.441430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 
00:30:39.317 [2024-07-25 07:36:46.441885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.441896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.442356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.442367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.442676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.442687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.443143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.443151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.443604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.443613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.444061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.444069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.444531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.444566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.445043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.445054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.445598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.445633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.446081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.446092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 
00:30:39.317 [2024-07-25 07:36:46.446643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.446653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.317 qpair failed and we were unable to recover it. 00:30:39.317 [2024-07-25 07:36:46.447106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.317 [2024-07-25 07:36:46.447115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.447700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.447734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.448428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.448463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.448902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.448912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.449458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.449493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.449970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.449981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.450557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.450591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.451048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.451058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.451621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.451655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 
00:30:39.318 [2024-07-25 07:36:46.452147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.452158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.452712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.452746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.453168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.453179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.453622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.453656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.453999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.454011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.454569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.454603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.454951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.454962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.455521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.455556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.455997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.456008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.456439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.456474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 
00:30:39.318 [2024-07-25 07:36:46.456940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.456954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.457514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.457548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.457964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.457975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.458491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.458525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.458973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.458983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.459495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.459530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.459873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.459884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.460302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.460312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.460736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.460745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.461183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.461192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 
00:30:39.318 [2024-07-25 07:36:46.461631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.461640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.461982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.461992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.462472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.462507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.462740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.462754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.463230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.463240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.463657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.463667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.464016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.464027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.464246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.464257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.464629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.318 [2024-07-25 07:36:46.464639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.318 qpair failed and we were unable to recover it. 00:30:39.318 [2024-07-25 07:36:46.465065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.465073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 
00:30:39.319 [2024-07-25 07:36:46.465176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.465186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.465587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.465597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.466069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.466078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.466657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.466692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.467165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.467176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.467639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.467648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.468073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.468083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.468554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.468588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.468946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.468957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.469501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.469510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 
00:30:39.319 [2024-07-25 07:36:46.469934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.469942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.470491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.470524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.470978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.470989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.471534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.471567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.472094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.472104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.472620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.472629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.473074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.473083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.473604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.473637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.474135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.474148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.474677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.474711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 
00:30:39.319 [2024-07-25 07:36:46.475184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.475199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.475776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.475809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.476407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.476440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.476916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.476927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.477500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.477533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.478015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.478026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.478576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.478610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.478962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.478972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.479452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.479486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.319 qpair failed and we were unable to recover it. 00:30:39.319 [2024-07-25 07:36:46.479947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.319 [2024-07-25 07:36:46.479957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 
00:30:39.320 [2024-07-25 07:36:46.480519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.480552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.481021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.481031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.481468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.481501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.481999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.482009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.482571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.482606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.482826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.482839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.483294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.483305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.483760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.483769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.483982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.483992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.484420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.484429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 
00:30:39.320 [2024-07-25 07:36:46.484874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.484883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.485314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.485323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.485784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.485792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.486243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.486252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.486740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.486748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.486868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.486880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.487233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.487250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.487759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.487768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.487982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.487993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.488308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.488317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 
00:30:39.320 [2024-07-25 07:36:46.488792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.488800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.489276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.489285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.489759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.489767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.490206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.490215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.490576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.490584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.491048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.491056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.491527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.491560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.492021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.492032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.492602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.492636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.493086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.493097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 
00:30:39.320 [2024-07-25 07:36:46.493442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.493456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.493903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.493912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.494503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.494536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.494987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.494997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.495572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.495605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.496063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.496073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.496612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.320 [2024-07-25 07:36:46.496645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.320 qpair failed and we were unable to recover it. 00:30:39.320 [2024-07-25 07:36:46.497117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.497127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.497698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.497731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.498195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.498210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 
00:30:39.321 [2024-07-25 07:36:46.498652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.498686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.499148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.499158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.499705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.499739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.500179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.500190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.500736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.500769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.501430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.501463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.501919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.501929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.502485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.502518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.502939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.502949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.503561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.503595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 
00:30:39.321 [2024-07-25 07:36:46.504063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.504073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.504634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.504667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.505130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.505141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.505692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.505726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.506208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.506219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.506772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.506805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.507160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.507170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.507510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.507543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.507995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.508005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.508613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.508647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 
00:30:39.321 [2024-07-25 07:36:46.509109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.509121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.509645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.509678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.510140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.510151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.510602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.510611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.511126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.511136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.511748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.511782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.512457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.512490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.512948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.512958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.513128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.513137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.513621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.513630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 
00:30:39.321 [2024-07-25 07:36:46.514128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.514140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.514607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.514616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.514952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.514961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.515531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.515564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.321 [2024-07-25 07:36:46.516029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.321 [2024-07-25 07:36:46.516039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.321 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.516610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.516643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.517072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.517082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.517591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.517625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.517852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.517864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.518188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.518197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 
00:30:39.322 [2024-07-25 07:36:46.518607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.518617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.518732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.518743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.519188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.519197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.519656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.519666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.520130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.520139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.520603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.520612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.521065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.521074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.521617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.521649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.522177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.522188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.522727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.522760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 
00:30:39.322 [2024-07-25 07:36:46.523421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.523455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.523966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.523976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.524523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.524556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.524901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.524911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.525442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.525475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.525940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.525951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.526428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.526462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.526805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.526816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.527262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.527272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.527736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.527745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 
00:30:39.322 [2024-07-25 07:36:46.527985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.527993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.528423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.528431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.528809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.528818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.529265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.529274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.529587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.529596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.530024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.530032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.530385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.530395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.530672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.530680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.531143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.531152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.531496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.531505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 
00:30:39.322 [2024-07-25 07:36:46.531942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.531952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.532396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.532404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.322 [2024-07-25 07:36:46.532881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.322 [2024-07-25 07:36:46.532890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.322 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.533356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.533365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.533833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.533842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.534332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.534341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.534715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.534724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.535184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.535192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.535643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.535652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.536092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.536101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 
00:30:39.323 [2024-07-25 07:36:46.536584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.536592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.537058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.537068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.537601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.537634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.538098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.538109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.538661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.538694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.539115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.539126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.539570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.539580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.540032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.540041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.540649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.540682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.541147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.541158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 
00:30:39.323 [2024-07-25 07:36:46.541714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.541748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.542210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.542221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.542739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.542772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.543139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.543149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.543727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.543760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.544376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.544410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.544876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.544886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.545488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.545522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.545976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.545987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.546554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.546587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 
00:30:39.323 [2024-07-25 07:36:46.547057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.547067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.547622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.547656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.548094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.548104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.548673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.548706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.549141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.549151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.549598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.549607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.550078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.550086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.550457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.550467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.550923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.550933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 00:30:39.323 [2024-07-25 07:36:46.551503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.551537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.323 qpair failed and we were unable to recover it. 
00:30:39.323 [2024-07-25 07:36:46.551995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.323 [2024-07-25 07:36:46.552008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.552532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.552565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.552969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.552979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.553444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.553478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.553934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.553944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.554509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.554542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.554987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.554997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.555474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.555508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.555970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.555981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.556554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.556587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 
00:30:39.324 [2024-07-25 07:36:46.557036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.557046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.557597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.557631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.558091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.558101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.558579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.558588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.559055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.559064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.559500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.559534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.559960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.559971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.560490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.560524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.560977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.560989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.561484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.561518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 
00:30:39.324 [2024-07-25 07:36:46.561973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.561983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.562556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.562589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.563031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.563041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.563480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.563513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.563960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.563970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.564540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.564573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.565050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.565061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.565629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.565663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.566122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.566132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.566509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.566543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 
00:30:39.324 [2024-07-25 07:36:46.567005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.567015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.567422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.567455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.324 [2024-07-25 07:36:46.567912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.324 [2024-07-25 07:36:46.567923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.324 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.568140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.568147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.568597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.568605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.569066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.569076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.569426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.569459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.569916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.569926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.570508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.570541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.570775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.570788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 
00:30:39.325 [2024-07-25 07:36:46.571239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.571252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.571641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.571649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.572094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.572103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.572564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.572574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.572917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.572925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.573368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.573377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.573810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.573820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.574295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.574305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.574791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.574800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 00:30:39.325 [2024-07-25 07:36:46.575242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.325 [2024-07-25 07:36:46.575252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.325 qpair failed and we were unable to recover it. 
00:30:39.325 [2024-07-25 07:36:46.575692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.325 [2024-07-25 07:36:46.575701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:39.325 qpair failed and we were unable to recover it.
00:30:39.331 [... the same three-line pattern repeats for every reconnect attempt between 2024-07-25 07:36:46.575 and 07:36:46.673: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:30:39.600 [2024-07-25 07:36:46.673707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.600 [2024-07-25 07:36:46.673717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.600 qpair failed and we were unable to recover it. 00:30:39.600 [2024-07-25 07:36:46.674181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.600 [2024-07-25 07:36:46.674190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.600 qpair failed and we were unable to recover it. 00:30:39.600 [2024-07-25 07:36:46.674642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.600 [2024-07-25 07:36:46.674652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.600 qpair failed and we were unable to recover it. 00:30:39.600 [2024-07-25 07:36:46.675006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.600 [2024-07-25 07:36:46.675015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.600 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.675546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.675578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.676041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.676052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.676596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.676628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.677142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.677155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.677679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.677711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.678169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.678180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 
00:30:39.601 [2024-07-25 07:36:46.678762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.678795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.679433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.679466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.679938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.679949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.680429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.680462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.680652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.680664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.681142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.681151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.681559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.681568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.682042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.682051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.682615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.682647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.683116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.683125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 
00:30:39.601 [2024-07-25 07:36:46.683316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.683330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.683774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.683783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.684007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.684018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.684458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.684467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.684940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.684949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.685520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.685553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.686002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.686012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.686544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.686577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.687044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.687054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.687606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.687638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 
00:30:39.601 [2024-07-25 07:36:46.688089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.688099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.688600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.688609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.689065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.689075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.689608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.689640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.690098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.690108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.690646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.690678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.691142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.691152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.691687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.691720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.692173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.692184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.692716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.692750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 
00:30:39.601 [2024-07-25 07:36:46.693199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.601 [2024-07-25 07:36:46.693213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.601 qpair failed and we were unable to recover it. 00:30:39.601 [2024-07-25 07:36:46.693765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.693798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.694416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.694449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.694910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.694920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.695492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.695524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.696044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.696055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.696536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.696568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.697033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.697047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.697509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.697541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.698000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.698010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 
00:30:39.602 [2024-07-25 07:36:46.698550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.698582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.699081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.699091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.699547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.699556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.700001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.700010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.700241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.700253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.700670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.700680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.700899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.700909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.701425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.701458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.701932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.701942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.702506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.702540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 
00:30:39.602 [2024-07-25 07:36:46.703005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.703015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.703577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.703609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.704067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.704077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.704602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.704634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.705066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.705076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.705611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.705644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.706098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.706107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.706648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.706680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.706874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.706885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.707349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.707358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 
00:30:39.602 [2024-07-25 07:36:46.707802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.707812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.708279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.708287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.708711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.708720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.709164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.709172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.709610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.709620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.710082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.710091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.710523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.710532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.710967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.710976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.602 [2024-07-25 07:36:46.711493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.602 [2024-07-25 07:36:46.711525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.602 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.711977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.711987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 
00:30:39.603 [2024-07-25 07:36:46.712543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.712575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.713020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.713030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.713504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.713537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.714001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.714010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.714540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.714572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.715025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.715035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.715575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.715607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.716074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.716088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.716510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.716519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.716958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.716968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 
00:30:39.603 [2024-07-25 07:36:46.717492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.717524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.717995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.718005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.718520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.718553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.719001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.719011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.719546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.719580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.720049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.720059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.720613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.720645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.721096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.721106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.721643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.721676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.722104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.722115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 
00:30:39.603 [2024-07-25 07:36:46.722560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.722570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.723025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.723033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.723384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.723416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.723881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.723891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.724451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.724483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.724931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.724942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.725507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.725539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.726005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.726015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.726581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.726613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.727065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.727075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 
00:30:39.603 [2024-07-25 07:36:46.727644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.727676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.728147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.728158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.728686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.728718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.729169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.729179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.729721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.729753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.730123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.603 [2024-07-25 07:36:46.730134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.603 qpair failed and we were unable to recover it. 00:30:39.603 [2024-07-25 07:36:46.730467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.730499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.730961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.730972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.731518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.731550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.732023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.732033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 
00:30:39.604 [2024-07-25 07:36:46.732600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.732632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.733090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.733101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.733564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.733573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.734038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.734047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.734604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.734637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.734971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.734981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.735597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.735629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.736091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.736105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.736573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.736583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.737029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.737037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 
00:30:39.604 [2024-07-25 07:36:46.737584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.737616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.738080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.738090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.738521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.738531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.738978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.738987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.739576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.739609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.740099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.740110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.740627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.740636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.741163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.741172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.741693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.741725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.742188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.742199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 
00:30:39.604 [2024-07-25 07:36:46.742716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.742748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.743210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.743222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.743523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.743554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.743997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.744008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.744615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.744647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.745171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.745182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.745665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.745697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.746164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.746174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.746512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.746543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.746979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.746989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 
00:30:39.604 [2024-07-25 07:36:46.747530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.747562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.747988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.747998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.748230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.748243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.748695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.604 [2024-07-25 07:36:46.748704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.604 qpair failed and we were unable to recover it. 00:30:39.604 [2024-07-25 07:36:46.749017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.749027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.749569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.749601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.750098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.750108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.750543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.750552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.750998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.751008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.751567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.751599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 
00:30:39.605 [2024-07-25 07:36:46.752062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.752073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.752618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.752650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.753101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.753111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.753652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.753684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.754147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.754157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.754742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.754775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.755411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.755443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.755917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.755930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.756526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.756559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.756912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.756922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 
00:30:39.605 [2024-07-25 07:36:46.757481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.757513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.757976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.757986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.758554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.758586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.759039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.759049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.759585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.759619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.760083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.760094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.760553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.760562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.760911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.760921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.761448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.761480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.761919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.761929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 
00:30:39.605 [2024-07-25 07:36:46.762484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.762516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.762974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.762984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.763524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.763557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.764027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.764037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.605 [2024-07-25 07:36:46.764493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.605 [2024-07-25 07:36:46.764525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.605 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.764982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.764992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.765575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.765608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.766079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.766090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.766557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.766566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.766995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.767003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 
00:30:39.606 [2024-07-25 07:36:46.767439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.767472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.767939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.767950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.768481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.768513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.768970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.768981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.769535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.769567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.770034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.770045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.770500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.770532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.770986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.770996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.771467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.771499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.771963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.771974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 
00:30:39.606 [2024-07-25 07:36:46.772470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.772503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.772955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.772965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.773500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.773532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.773962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.773972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.774461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.774493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.774949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.774960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.775499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.775531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.775994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.776009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.776575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.776608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.777072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.777081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 
00:30:39.606 [2024-07-25 07:36:46.777612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.777644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.778114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.778124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.778646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.778678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.779140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.779150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.779599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.779608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.780075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.780083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.780609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.780642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.781094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.781105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.781637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.781669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.782134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.782144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 
00:30:39.606 [2024-07-25 07:36:46.782544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.782553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.783000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.783009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.606 qpair failed and we were unable to recover it. 00:30:39.606 [2024-07-25 07:36:46.783547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.606 [2024-07-25 07:36:46.783579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.784005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.784015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.784544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.784577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.785028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.785039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.785574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.785605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.786032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.786044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.786604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.786637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.787093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.787104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 
00:30:39.607 [2024-07-25 07:36:46.787655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.787688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.788161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.788171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.788728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.788760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.789222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.789244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.789567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.789577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.790010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.790018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.790473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.790482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.790922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.790931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.791469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.791501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.791939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.791950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 
00:30:39.607 [2024-07-25 07:36:46.792482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.792515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.792971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.792981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.793541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.793573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.793795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.793807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.794216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.794226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.794449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.794461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.794899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.794907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.795367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.795380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.795843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.795853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.796296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.796304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 
00:30:39.607 [2024-07-25 07:36:46.796748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.796757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.797216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.797225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.797661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.797669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.798111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.798121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.798547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.798557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.798773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.798783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.799094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.799104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.799469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.799478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.799944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.799952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 00:30:39.607 [2024-07-25 07:36:46.800462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.607 [2024-07-25 07:36:46.800471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.607 qpair failed and we were unable to recover it. 
00:30:39.607 [2024-07-25 07:36:46.800676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.800687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.801131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.801140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.801610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.801619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.802085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.802095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.802556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.802564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.803005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.803015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.803454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.803462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.803916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.803924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.804512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.804544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.804999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.805010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 
00:30:39.608 [2024-07-25 07:36:46.805592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.805624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.806093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.806103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.806639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.806649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.807093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.807101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.807568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.807577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.808045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.808053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.808633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.808665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.809114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.809124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.809551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.809584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.810047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.810057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 
00:30:39.608 [2024-07-25 07:36:46.810624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.810657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.811098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.811110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.811577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.811586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.812046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.812055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.812616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.812648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.813100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.813111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.813642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.813674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.814142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.814156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.814634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.814644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.815094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.815102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 
00:30:39.608 [2024-07-25 07:36:46.815643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.815676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.816140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.816151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.816593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.816602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.817040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.817049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.817584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.817617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.818300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.818321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.818766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.818776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.608 [2024-07-25 07:36:46.819116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.608 [2024-07-25 07:36:46.819125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.608 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.819591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.819600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.820072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.820081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 
00:30:39.609 [2024-07-25 07:36:46.820611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.820643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.821106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.821118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.821575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.821584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.821928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.821938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.822507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.822539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.822893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.822903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.823446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.823478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.823950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.823960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.824509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.824541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.825011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.825021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 
00:30:39.609 [2024-07-25 07:36:46.825472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.825482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.825924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.825932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.826418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.826436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.826896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.826906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.827369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.827378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.827842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.827850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.828256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.828265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.828675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.828684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.829125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.829133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.829573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.829582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 
00:30:39.609 [2024-07-25 07:36:46.830050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.830059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.830362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.830372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.830807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.830816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.831283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.831292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.831751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.831761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.832081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.832090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.832532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.832542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.832967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.832978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.833447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.833455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.833896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.833905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 
00:30:39.609 [2024-07-25 07:36:46.834344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.834353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.834781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.834790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.835242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.835251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.835706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.835714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.836155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.836164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.609 [2024-07-25 07:36:46.836522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.609 [2024-07-25 07:36:46.836531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.609 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.836991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.837000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.837464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.837496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.837945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.837956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.838471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.838503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 
00:30:39.610 [2024-07-25 07:36:46.838965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.838975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.839544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.839576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.840032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.840042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.840492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.840524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.840985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.840995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.841551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.841584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.842029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.842040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.842605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.842638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.843018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.843029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.843566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.843599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 
00:30:39.610 [2024-07-25 07:36:46.844047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.844058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.844595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.844627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.845093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.845103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.845611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.845643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.846117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.846128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.846583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.846593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.847060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.847069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.847606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.847639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.848164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.848175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.848694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.848727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 
00:30:39.610 [2024-07-25 07:36:46.849194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.849218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.849628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.849659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.850113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.850125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.850659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.850691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.851165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.851176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.851783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.851815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.852414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.852446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.852874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.610 [2024-07-25 07:36:46.852887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.610 qpair failed and we were unable to recover it. 00:30:39.610 [2024-07-25 07:36:46.853487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.853519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.853969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.853980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 
00:30:39.611 [2024-07-25 07:36:46.854545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.854577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.855006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.855016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.855245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.855260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.855728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.855737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.856177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.856185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.856691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.856700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.857125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.857134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.857675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.857707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.858161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.858172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.858616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.858625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 
00:30:39.611 [2024-07-25 07:36:46.859126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.859134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.859670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.859703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.860152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.860163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.860486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.860517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.861032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.861044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.861474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.861506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.861858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.861868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.862339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.862348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.862847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.862855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.863401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.863434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 
00:30:39.611 [2024-07-25 07:36:46.863931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.863941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.864418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.864427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.864893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.864902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.865443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.865475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.865948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.865959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.866520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.866553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.867016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.867026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.867567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.867599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.868051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.868062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.868624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.868656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 
00:30:39.611 [2024-07-25 07:36:46.869121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.869131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.869665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.869697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.870155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.870166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.870698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.870730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.871193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.871207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.611 [2024-07-25 07:36:46.871737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.611 [2024-07-25 07:36:46.871770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.611 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.872367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.872399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.872825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.872840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.873421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.873454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.873906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.873916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 
00:30:39.612 [2024-07-25 07:36:46.874362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.874372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.874845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.874854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.875439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.875470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.875963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.875974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.876510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.876543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.876969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.876980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.877544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.877576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.877931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.877941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.878384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.878393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.878852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.878860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 
00:30:39.612 [2024-07-25 07:36:46.879410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.879443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.879912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.879924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.880390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.880400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.880836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.880846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.881281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.881291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.881765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.881774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.882248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.882258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.882732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.882741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.883199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.883211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.883634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.883643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 
00:30:39.612 [2024-07-25 07:36:46.884084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.884093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.884555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.884564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.885020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.885029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.885479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.885511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.885866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.885877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.886338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.886347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.886779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.886788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.887223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.887232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.887678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.887686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.888148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.888157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 
00:30:39.612 [2024-07-25 07:36:46.888629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.888637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.889083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.889092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.889534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.889543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.612 [2024-07-25 07:36:46.890002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.612 [2024-07-25 07:36:46.890011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.612 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.890563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.890595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.891046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.891056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.891616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.891649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.892111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.892121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.892562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.892571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.893021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.893029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 
00:30:39.613 [2024-07-25 07:36:46.893577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.893610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.894074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.894084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.894549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.894559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.895072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.895081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.895607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.895640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.896109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.896120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.896636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.896668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.897121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.897131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.897581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.897590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.898056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.898065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 
00:30:39.613 [2024-07-25 07:36:46.898606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.898638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.899137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.899147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.899723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.899755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.900374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.900406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.900877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.900888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.901452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.901484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.901705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.901717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.902162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.902171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.902520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.902529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.902973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.902982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 
00:30:39.613 [2024-07-25 07:36:46.903450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.903459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.903921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.903930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.904481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.904513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.904968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.904978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.905549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.905585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.906050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.906061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.906609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.906641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.906866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.906878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.907328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.907337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.907811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.907820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 
00:30:39.613 [2024-07-25 07:36:46.908283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.613 [2024-07-25 07:36:46.908292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.613 qpair failed and we were unable to recover it. 00:30:39.613 [2024-07-25 07:36:46.908737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.908746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.908960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.908970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.909176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.909186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.909622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.909630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.910069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.910077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.910637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.910669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.911138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.911148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.911372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.911385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.911832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.911841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 
00:30:39.614 [2024-07-25 07:36:46.912063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.912074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.912500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.912509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.912972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.912980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.913421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.913453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.913908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.913918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.914482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.914515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.914979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.914989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.915523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.915555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.916006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.916016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.916582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.916614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 
00:30:39.614 [2024-07-25 07:36:46.917077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.917088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.917563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.917572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.918017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.918026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.918589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.918622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.919087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.919097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.919555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.919565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.920004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.920013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.920480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.920512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.920982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.920992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.921552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.921585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 
00:30:39.614 [2024-07-25 07:36:46.922038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.922048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.922610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.922642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.923110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.923121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.614 [2024-07-25 07:36:46.923656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.614 [2024-07-25 07:36:46.923688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.614 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.924145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.924159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.924684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.924717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.925182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.925193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.925743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.925775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.926417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.926450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.926908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.926918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 
00:30:39.615 [2024-07-25 07:36:46.927159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.927167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.927615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.927623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.928063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.928072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.928620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.928651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.929005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.929015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.929551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.929583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.930035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.930045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.930612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.930645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.931000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.931011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.931568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.931601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 
00:30:39.615 [2024-07-25 07:36:46.931950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.931960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.932449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.932481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.932912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.932922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.933483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.933515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.933962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.933972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.934539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.934571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.935039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.935049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.935592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.935624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.936077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.936087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.936566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.936575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 
00:30:39.615 [2024-07-25 07:36:46.937043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.937052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.937592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.937625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.938079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.938090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.938461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.938470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.938934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.938944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.939487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.939519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.939969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.939979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.940538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.940570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.941037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.941047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.941589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.941621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 
00:30:39.615 [2024-07-25 07:36:46.942076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.942086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.615 qpair failed and we were unable to recover it. 00:30:39.615 [2024-07-25 07:36:46.942527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.615 [2024-07-25 07:36:46.942537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.943004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.943013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.943553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.943586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.944048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.944061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.944607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.944639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.945104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.945114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.945659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.945692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.946143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.946153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.946583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.946593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 
00:30:39.616 [2024-07-25 07:36:46.947060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.947069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.947596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.947628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.948079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.948090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.948679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.948711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.949174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.949184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.949642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.949675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.950006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.950017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.950595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.950628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.951103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.951115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.951546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.951555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 
00:30:39.616 [2024-07-25 07:36:46.952000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.952009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.952557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.952590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.953058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.953068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.953607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.953639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.954090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.954100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.954536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.954568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.954790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.954802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.955244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.955253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.955596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.955606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.956076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.956084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 
00:30:39.616 [2024-07-25 07:36:46.956570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.956579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.956884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.956893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.957343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.957352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.957822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.957831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.958048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.958060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.958498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.958507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.958948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.958957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.959486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.959518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.616 [2024-07-25 07:36:46.959984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.616 [2024-07-25 07:36:46.959994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.616 qpair failed and we were unable to recover it. 00:30:39.886 [2024-07-25 07:36:46.960536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.886 [2024-07-25 07:36:46.960569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.886 qpair failed and we were unable to recover it. 
00:30:39.886 [2024-07-25 07:36:46.961021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.886 [2024-07-25 07:36:46.961033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.886 qpair failed and we were unable to recover it. 00:30:39.886 [2024-07-25 07:36:46.961629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.886 [2024-07-25 07:36:46.961662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.886 qpair failed and we were unable to recover it. 00:30:39.886 [2024-07-25 07:36:46.962090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.886 [2024-07-25 07:36:46.962100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.886 qpair failed and we were unable to recover it. 00:30:39.886 [2024-07-25 07:36:46.962563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.886 [2024-07-25 07:36:46.962572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.886 qpair failed and we were unable to recover it. 00:30:39.886 [2024-07-25 07:36:46.963064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.886 [2024-07-25 07:36:46.963076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.886 qpair failed and we were unable to recover it. 00:30:39.886 [2024-07-25 07:36:46.963616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.886 [2024-07-25 07:36:46.963648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.886 qpair failed and we were unable to recover it. 00:30:39.886 [2024-07-25 07:36:46.964120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.886 [2024-07-25 07:36:46.964130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.886 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.964659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.964692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.965140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.965150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.965680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.965713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 
00:30:39.887 [2024-07-25 07:36:46.966147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.966157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.966595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.966629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.967080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.967090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.967558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.967566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.968035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.968043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.968581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.968613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.969065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.969076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.969635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.969668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.970137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.970147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.970689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.970721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 
00:30:39.887 [2024-07-25 07:36:46.971174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.971184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.971737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.971770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.972377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.972411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.972742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.972752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.973206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.973216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.973659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.973668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.974118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.974127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.974651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.974683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.975138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.975148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.975579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.975588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 
00:30:39.887 [2024-07-25 07:36:46.975922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.975932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.976542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.976574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.977025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.977036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.977620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.977653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.978116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.978126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.978655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.978688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.979147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.979157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.979594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.979603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.980065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.980073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.887 qpair failed and we were unable to recover it. 00:30:39.887 [2024-07-25 07:36:46.980673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.887 [2024-07-25 07:36:46.980706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 
00:30:39.888 [2024-07-25 07:36:46.981158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.981168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.981673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.981705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.982169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.982180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.982704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.982736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.983190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.983209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.983763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.983795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.984364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.984396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.984848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.984859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.985395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.985427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.985857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.985867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 
00:30:39.888 [2024-07-25 07:36:46.986433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.986465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.986916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.986927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.987258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.987267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.987710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.987718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.988178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.988186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.988628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.988636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.989093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.989101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.989551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.989560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.990023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.990033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.990470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.990502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 
00:30:39.888 [2024-07-25 07:36:46.990957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.990967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.991528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.991561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.992023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.992034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.992568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.992601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.993063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.993075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.993635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.993668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.994120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.994130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.994664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.994696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.995153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.995164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.995715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.995748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 
00:30:39.888 [2024-07-25 07:36:46.996399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.996431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.996886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.996897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.997436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.997469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.997935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.997945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.888 [2024-07-25 07:36:46.998468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.888 [2024-07-25 07:36:46.998500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.888 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:46.998943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:46.998953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:46.999485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:46.999517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:46.999944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:46.999954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.000512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.000544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.000996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.001008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 
00:30:39.889 [2024-07-25 07:36:47.001549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.001581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.002046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.002057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.002612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.002644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.003138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.003148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.003688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.003724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.004194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.004209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.004714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.004746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.005093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.005104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.005599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.005608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.006085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.006093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 
00:30:39.889 [2024-07-25 07:36:47.006557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.006566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.007007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.007015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.007550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.007583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.008052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.008063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.008634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.008667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.008890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.008904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.009465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.009499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.009957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.009967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.010193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.010213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.010672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.010681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 
00:30:39.889 [2024-07-25 07:36:47.011123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.011131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.011657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.011690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.012039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.012050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.012593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.012625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.013075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.013085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.013547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.013556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.014017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.014026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.014564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.014598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.015132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.015143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.015664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.015696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 
00:30:39.889 [2024-07-25 07:36:47.016164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.016174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.016708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.889 [2024-07-25 07:36:47.016740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.889 qpair failed and we were unable to recover it. 00:30:39.889 [2024-07-25 07:36:47.017196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.017211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.017731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.017764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.018409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.018441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.018901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.018911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.019535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.019569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.020033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.020043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.020602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.020634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.021089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.021099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 
00:30:39.890 [2024-07-25 07:36:47.021565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.021574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.022034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.022042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.022595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.022627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.023084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.023095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.023602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.023638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.024099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.024110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.024551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.024561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.025001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.025010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.025544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.025576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.026039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.026050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 
00:30:39.890 [2024-07-25 07:36:47.026605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.026637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.027063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.027075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.027618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.027650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.028125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.028136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.028660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.028693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.029137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.029148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.029687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.029720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.030151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.030162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.030721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.030754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.031199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.031216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 
00:30:39.890 [2024-07-25 07:36:47.031631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.031639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.031859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.031870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.032435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.032467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.032917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.032927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.033148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.033161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.033490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.033499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.033850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.033859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.890 qpair failed and we were unable to recover it. 00:30:39.890 [2024-07-25 07:36:47.034309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.890 [2024-07-25 07:36:47.034317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.034762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.034771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.035227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.035235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 
00:30:39.891 [2024-07-25 07:36:47.035660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.035670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.036114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.036123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.036572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.036581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.037040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.037049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.037510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.037519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.037961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.037969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.038419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.038452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.038924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.038935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.039496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.039529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.039980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.039991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 
00:30:39.891 [2024-07-25 07:36:47.040531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.040564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.040987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.040997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.041557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.041589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.042033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.042043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.042587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.042623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.043091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.043101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.043565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.043574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.044019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.044028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.044562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.044594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.045061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.045073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 
00:30:39.891 [2024-07-25 07:36:47.045661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.045694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.046148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.046158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.046693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.046725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.047192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.047207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.047724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.047756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.891 [2024-07-25 07:36:47.048214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.891 [2024-07-25 07:36:47.048224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.891 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.048722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.048754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.049412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.049444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.049914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.049924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.050457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.050489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 
00:30:39.892 [2024-07-25 07:36:47.050941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.050951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.051513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.051545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.051878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.051889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.052431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.052464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.052915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.052926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.053357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.053366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.053834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.053842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.054332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.054341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.054787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.054795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.055259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.055267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 
00:30:39.892 [2024-07-25 07:36:47.055740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.055749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.056054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.056064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.056495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.056504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.056962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.056971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.057521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.057554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.058024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.058034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.058395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.058427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.058758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.058769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.059250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.059259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.059708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.059717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 
00:30:39.892 [2024-07-25 07:36:47.060167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.060175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.060592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.060601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.061066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.061074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.061607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.061638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.061968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.061982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.062502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.062535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.062756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.062767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.063224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.063234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.063688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.063697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.064166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.064174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 
00:30:39.892 [2024-07-25 07:36:47.064613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.064622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.065064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.065073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.065603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.892 [2024-07-25 07:36:47.065635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.892 qpair failed and we were unable to recover it. 00:30:39.892 [2024-07-25 07:36:47.066102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.893 [2024-07-25 07:36:47.066113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.893 qpair failed and we were unable to recover it. 00:30:39.893 [2024-07-25 07:36:47.066555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.893 [2024-07-25 07:36:47.066565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.893 qpair failed and we were unable to recover it. 00:30:39.893 [2024-07-25 07:36:47.067007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.893 [2024-07-25 07:36:47.067016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.893 qpair failed and we were unable to recover it. 00:30:39.893 [2024-07-25 07:36:47.067546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.893 [2024-07-25 07:36:47.067579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.893 qpair failed and we were unable to recover it. 00:30:39.893 [2024-07-25 07:36:47.068043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.893 [2024-07-25 07:36:47.068053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.893 qpair failed and we were unable to recover it. 00:30:39.893 [2024-07-25 07:36:47.068647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.893 [2024-07-25 07:36:47.068680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.893 qpair failed and we were unable to recover it. 00:30:39.893 [2024-07-25 07:36:47.069124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.893 [2024-07-25 07:36:47.069134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.893 qpair failed and we were unable to recover it. 
00:30:39.893 [2024-07-25 07:36:47.069669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.893 [2024-07-25 07:36:47.069701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:39.893 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 07:36:47.070 through 07:36:47.174, with only the timestamps changing ...]
00:30:39.899 [2024-07-25 07:36:47.174601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:39.899 [2024-07-25 07:36:47.174634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:39.899 qpair failed and we were unable to recover it.
00:30:39.899 [2024-07-25 07:36:47.175134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.175145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.175678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.175714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.176155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.176165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.176724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.176753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.177414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.177444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.177896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.177904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.178441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.178470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.178916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.178924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.179501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.179531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.179989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.179998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 
00:30:39.899 [2024-07-25 07:36:47.180493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.180525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.180958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.180969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.181191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.181209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.181666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.181674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.182118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.182127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.182710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.182719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.183118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.183128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.183474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.183506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.183968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.183980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.899 [2024-07-25 07:36:47.184539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.184570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 
00:30:39.899 [2024-07-25 07:36:47.185070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.899 [2024-07-25 07:36:47.185082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.899 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.185630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.185662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.186113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.186123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.186653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.186685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.187153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.187163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.187685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.187718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.188227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.188249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.188689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.188698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.189156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.189166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.189558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.189568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 
00:30:39.900 [2024-07-25 07:36:47.189996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.190005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.190498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.190531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.191030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.191041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.191586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.191619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.192107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.192118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.192552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.192560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.193020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.193029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.193564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.193595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.194044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.194054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.194604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.194636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 
00:30:39.900 [2024-07-25 07:36:47.195106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.195116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.195604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.195639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.196136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.196147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.196573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.196582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.197042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.197051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.197587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.197618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.198070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.198082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.198606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.198638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.198855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.198867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.199323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.199332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 
00:30:39.900 [2024-07-25 07:36:47.199779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.199788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.200002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.200012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.200442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.200451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.200804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.200813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.201300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.201308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.201638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.201646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.202106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.202114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.202613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.900 [2024-07-25 07:36:47.202622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.900 qpair failed and we were unable to recover it. 00:30:39.900 [2024-07-25 07:36:47.202812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.202822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.203219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.203228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 
00:30:39.901 [2024-07-25 07:36:47.203702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.203710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.204194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.204216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.204641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.204649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.205066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.205075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.205660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.205692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.206136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.206147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.206597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.206606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.207035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.207045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.207607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.207638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.208081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.208090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 
00:30:39.901 [2024-07-25 07:36:47.208636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.208667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.209085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.209095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.209555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.209564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.210002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.210011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.210541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.210572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.211037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.211047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.211605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.211638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.212088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.212099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.212561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.212571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.213028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.213036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 
00:30:39.901 [2024-07-25 07:36:47.213598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.213630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.214097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.214113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.214644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.214675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.215100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.215112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.215565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.215574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.216019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.216028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.901 [2024-07-25 07:36:47.216560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.901 [2024-07-25 07:36:47.216592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.901 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.217055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.217065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.217603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.217634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.218089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.218099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 
00:30:39.902 [2024-07-25 07:36:47.218636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.218667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.219133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.219143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.219582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.219592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.220027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.220036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.220586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.220618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.221083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.221092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.221553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.221562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.222005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.222014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.222476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.222507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.222981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.222991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 
00:30:39.902 [2024-07-25 07:36:47.223535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.223567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.224015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.224025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.224559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.224591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.225063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.225074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.225634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.225665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.225993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.226003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.226530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.226561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.227025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.227035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.227579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.227611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.227959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.227970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 
00:30:39.902 [2024-07-25 07:36:47.228517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.228548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.229000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.229010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.229451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.229482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.229932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.229943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.230484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.230516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.230944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.230954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.231515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.231547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.232000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.232010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.232553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.232584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.233029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.233039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 
00:30:39.902 [2024-07-25 07:36:47.233577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.233609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.234059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.234073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.902 qpair failed and we were unable to recover it. 00:30:39.902 [2024-07-25 07:36:47.234602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.902 [2024-07-25 07:36:47.234633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.235092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.235103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.235642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.235675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.236129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.236138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.236591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.236600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.237062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.237071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.237660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.237692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.238148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.238159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 
00:30:39.903 [2024-07-25 07:36:47.238694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.238725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.238944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.238956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.239504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.239535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.239987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.239997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.240536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.240567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.241032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.241043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.241494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.241525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.241977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.241987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.242571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.242602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.243031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.243041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 
00:30:39.903 [2024-07-25 07:36:47.243585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.243616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.244076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.244087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.244527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.244536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.245000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.245008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.245568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.245599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:39.903 [2024-07-25 07:36:47.246049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.903 [2024-07-25 07:36:47.246059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:39.903 qpair failed and we were unable to recover it. 00:30:40.173 [2024-07-25 07:36:47.246597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-07-25 07:36:47.246629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-07-25 07:36:47.246979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-07-25 07:36:47.246990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-07-25 07:36:47.247543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-07-25 07:36:47.247574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-07-25 07:36:47.248026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-07-25 07:36:47.248037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 
00:30:40.173 [2024-07-25 07:36:47.248582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-07-25 07:36:47.248613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-07-25 07:36:47.249087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-07-25 07:36:47.249098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-07-25 07:36:47.249549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-07-25 07:36:47.249557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-07-25 07:36:47.250002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-07-25 07:36:47.250011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-07-25 07:36:47.250544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-07-25 07:36:47.250575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-07-25 07:36:47.251041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.173 [2024-07-25 07:36:47.251051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.173 qpair failed and we were unable to recover it. 00:30:40.173 [2024-07-25 07:36:47.251600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.251631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.252075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.252085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.252667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.252699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.253045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.253055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 
00:30:40.174 [2024-07-25 07:36:47.253604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.253636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.254087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.254101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.254562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.254570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.254804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.254816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.255292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.255301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.255748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.255757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.256197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.256208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.256646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.256655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.256872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.256882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.257336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.257344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 
00:30:40.174 [2024-07-25 07:36:47.257785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.257793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.258269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.258277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.258501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.258511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.258957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.258965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.259407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.259416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.259876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.259884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.260347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.260356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.260700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.260709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.261151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.261159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.261507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.261516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 
00:30:40.174 [2024-07-25 07:36:47.261976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.261984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.262467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.262475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.262919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.262929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.263494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.263525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.263984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.263994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.264542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.264573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.265026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.265036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.265599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.265631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.266108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.266118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 00:30:40.174 [2024-07-25 07:36:47.266586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.174 [2024-07-25 07:36:47.266596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.174 qpair failed and we were unable to recover it. 
00:30:40.174 [2024-07-25 07:36:47.267039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.267047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.267599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.267631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.268102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.268113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.268546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.268578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.269031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.269040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.269487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.269518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.269994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.270004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.270476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.270507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.270958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.270969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.271527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.271558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 
00:30:40.175 [2024-07-25 07:36:47.271974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.271985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.272543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.272577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.272906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.272916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.273364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.273395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.273880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.273890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.274337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.274345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.274795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.274804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.275275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.275284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.275751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.275759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.276210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.276219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 
00:30:40.175 [2024-07-25 07:36:47.276554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.276563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.277028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.277036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.277483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.277492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.277934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.277943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.278385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.278394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.278824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.278833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.279183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.279191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.279685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.279693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.280020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.280029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.280558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.280589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 
00:30:40.175 [2024-07-25 07:36:47.281057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.281067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.281602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.281633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.282088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.282098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.282635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.282666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.283036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.283046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.283483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.283515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.283969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.283979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.175 qpair failed and we were unable to recover it. 00:30:40.175 [2024-07-25 07:36:47.284544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.175 [2024-07-25 07:36:47.284576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.285038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.285048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.285591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.285623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 
00:30:40.176 [2024-07-25 07:36:47.286070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.286081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.286638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.286670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.287136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.287146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.287674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.287706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.288158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.288170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.288676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.288708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.289173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.289182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.289517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.289548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.290008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.290019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.290592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.290623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 
00:30:40.176 [2024-07-25 07:36:47.291090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.291100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.291322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.291337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.291794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.291803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.292261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.292269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.292585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.292595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.293033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.293041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.293488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.293496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.293970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.293979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.294549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.294580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.295110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.295121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 
00:30:40.176 [2024-07-25 07:36:47.295556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.295565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.295868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.295878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.296343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.296352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.296797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.296806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.297245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.297254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.297764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.297773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.298213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.298222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.298629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.298637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.299064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.299072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.299617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.299649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 
00:30:40.176 [2024-07-25 07:36:47.300121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.300132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.300714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.300746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.301198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.301212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.301753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.301784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.176 [2024-07-25 07:36:47.302222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.176 [2024-07-25 07:36:47.302243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.176 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.302702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.302711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.303155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.303166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.303627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.303635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.304093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.304103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.304558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.304566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 
00:30:40.177 [2024-07-25 07:36:47.305061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.305070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.305625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.305656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.306122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.306132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.306663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.306694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.307150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.307160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.307482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.307512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.307756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.307768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.308067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.308076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.308521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.308530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.308997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.309005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 
00:30:40.177 [2024-07-25 07:36:47.309575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.309607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.309830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.309848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.310310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.310320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.310553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.310565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.311000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.311009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.311227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.311237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.311675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.311683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.311923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.311930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.312405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.312413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.312850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.312858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 
00:30:40.177 [2024-07-25 07:36:47.313295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.313304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.313681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.313689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.314137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.314145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.314584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.314593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.314937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.314945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.315397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.315406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.315870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.177 [2024-07-25 07:36:47.315879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.177 qpair failed and we were unable to recover it. 00:30:40.177 [2024-07-25 07:36:47.316308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.316318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.316773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.316782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.317242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.317251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 
00:30:40.178 [2024-07-25 07:36:47.317682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.317692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.318135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.318143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.318587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.318596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.319053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.319061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.319584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.319616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.320070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.320080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.320603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.320634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.321098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.321108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.321549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.321559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.322003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.322012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 
00:30:40.178 [2024-07-25 07:36:47.322547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.322579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.323043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.323053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.323611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.323642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.324108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.324118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.324641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.324673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.325133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.325142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.325577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.325586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.326025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.326034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.326569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.326600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.327065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.327074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 
00:30:40.178 [2024-07-25 07:36:47.327620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.327650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.178 [2024-07-25 07:36:47.328103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.178 [2024-07-25 07:36:47.328117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.178 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.328640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.328671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.329136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.329145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.329689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.329721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.330170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.330180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.330629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.330659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.331123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.331134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.331584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.331593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.331936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.331944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 
00:30:40.179 [2024-07-25 07:36:47.332497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.332528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.332997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.333007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.333566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.333597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.334044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.334055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.334406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.334437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.334878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.334888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.335446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.335477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.335928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.335939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.336492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.336523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.336995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.337004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 
00:30:40.179 [2024-07-25 07:36:47.337555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.337586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.338042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.338052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.338463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.338494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.338960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.338970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.339499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.339530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.339970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.339980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.340535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.340565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.340784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.340796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.341279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.341290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.341749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.341758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 
00:30:40.179 [2024-07-25 07:36:47.342208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.342217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.342690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.342699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.342923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.342931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.343376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.343384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.343825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.343833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.344295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.344303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.344737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.344745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.179 qpair failed and we were unable to recover it. 00:30:40.179 [2024-07-25 07:36:47.345086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.179 [2024-07-25 07:36:47.345096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.345565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.345574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.346030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.346039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 
00:30:40.180 [2024-07-25 07:36:47.346596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.346626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.347077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.347090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.347551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.347560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.348027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.348036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.348575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.348606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.349058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.349068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.349604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.349636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.350099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.350109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.350638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.350669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.351118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.351129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 
00:30:40.180 [2024-07-25 07:36:47.351581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.351589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.352061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.352071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.352594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.352625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.352971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.352982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.353519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.353549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.354020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.354030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.354596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.354626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.355075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.355085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.355618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.355627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.356044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.356052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 
00:30:40.180 [2024-07-25 07:36:47.356576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.356607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.357054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.357064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.357605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.357636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.358099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.358110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.358643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.358674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.359122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.359132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.359464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.359474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.359906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.359916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.360390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.360421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.360871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.360881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 
00:30:40.180 [2024-07-25 07:36:47.361329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.361338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.361559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.361571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.362005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.180 [2024-07-25 07:36:47.362013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.180 qpair failed and we were unable to recover it. 00:30:40.180 [2024-07-25 07:36:47.362440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.362448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.362887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.362895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.363355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.363363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.363862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.363870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.364308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.364316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.364766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.364774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.364991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.365002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 
00:30:40.181 [2024-07-25 07:36:47.365468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.365477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.365934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.365945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.366473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.366504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.366853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.366864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.367312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.367321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.367756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.367765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.368206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.368214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.368622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.368631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.369097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.369105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.369604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.369612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 
00:30:40.181 [2024-07-25 07:36:47.370051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.370060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.370510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.370541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.371002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.371012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.371547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.371578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.372028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.372038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.372574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.372605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.373067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.373077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.373603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.373634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.374085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.374096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.374594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.374626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 
00:30:40.181 [2024-07-25 07:36:47.375097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.375108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.375566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.375575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.376022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.376031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.376588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.376619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.377008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.377018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.377555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.377586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.181 [2024-07-25 07:36:47.378038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.181 [2024-07-25 07:36:47.378048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.181 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.378606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.378637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.379073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.379084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.379598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.379629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 
00:30:40.182 [2024-07-25 07:36:47.380077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.380087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.380647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.380656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.381129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.381137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.381568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.381578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.382015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.382024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.382583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.382615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.383087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.383096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.383526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.383535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.383971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.383980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.384529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.384560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 
00:30:40.182 [2024-07-25 07:36:47.385032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.385042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.385575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.385609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.385955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.385964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.386519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.386550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.387019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.387029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.387548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.387579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.388034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.388044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.388510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.388541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.389004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.389014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.389595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.389626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 
00:30:40.182 [2024-07-25 07:36:47.390122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.390133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.390652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.390682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.391153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.391164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.391696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.391726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.392169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.392180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.392731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.392762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.393111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.393122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.393571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.393581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.394025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.394035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.394601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.394631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 
00:30:40.182 [2024-07-25 07:36:47.395105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.395115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.182 qpair failed and we were unable to recover it. 00:30:40.182 [2024-07-25 07:36:47.395736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.182 [2024-07-25 07:36:47.395766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.396222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.396241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.396699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.396707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.397172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.397180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.397487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.397497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.397714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.397725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.397927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.397937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.398329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.398341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.398778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.398787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 
00:30:40.183 [2024-07-25 07:36:47.399224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.399233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.399709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.399717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.400184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.400192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.400622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.400630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.400842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.400853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.401370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.401378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.401808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.401816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.402254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.402262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.402699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.402707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 00:30:40.183 [2024-07-25 07:36:47.402929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.183 [2024-07-25 07:36:47.402937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.183 qpair failed and we were unable to recover it. 
[... the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 07:36:47.403348 through 07:36:47.496666 ...]
00:30:40.189 [2024-07-25 07:36:47.497126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.189 [2024-07-25 07:36:47.497136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:40.189 qpair failed and we were unable to recover it.
00:30:40.189 [2024-07-25 07:36:47.497627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.497636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.498074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.498082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.498552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.498560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.499025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.499033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.499579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.499608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.500055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.500065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.500597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.500626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.500974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.500985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.501547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.501576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.502023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.502033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 
00:30:40.189 [2024-07-25 07:36:47.502558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.502587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.502996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.503005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.503562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.503590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.504036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.504046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.504581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.504610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.505045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.505055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.505584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.505612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.506042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.506052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.506589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.506618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 00:30:40.189 [2024-07-25 07:36:47.507096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.189 [2024-07-25 07:36:47.507107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.189 qpair failed and we were unable to recover it. 
00:30:40.190 [2024-07-25 07:36:47.507635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.507664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.508117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.508128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.508619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.508628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.509124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.509134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.509662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.509691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.509890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.509901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.510089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.510100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.510530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.510540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.511007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.511019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.511246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.511256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 
00:30:40.190 [2024-07-25 07:36:47.511710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.511718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.512182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.512191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.512736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.512765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.513209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.513219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.513721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.513750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.513963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.513975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.514474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.514503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.514956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.514966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.515519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.515547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.516009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.516018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 
00:30:40.190 [2024-07-25 07:36:47.516580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.516609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.516826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.516838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.517175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.517185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.517689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.517697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.518119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.518128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.518568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.518577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.519013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.519022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.519586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.519616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.520024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.520033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.520250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.520263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 
00:30:40.190 [2024-07-25 07:36:47.520697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.520706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.521196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.521208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.521622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.521630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.522060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.522068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.522652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.190 [2024-07-25 07:36:47.522680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.190 qpair failed and we were unable to recover it. 00:30:40.190 [2024-07-25 07:36:47.523109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.523119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.523644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.523673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.524019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.524029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.524596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.524625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.525082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.525092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 
00:30:40.191 [2024-07-25 07:36:47.525557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.525566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.525869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.525878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.526453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.526482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.526944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.526953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.527486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.527521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.527967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.527978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.528510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.528539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.528998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.529007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.529550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.529583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.530030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.530040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 
00:30:40.191 [2024-07-25 07:36:47.530576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.530606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.531065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.531074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.531494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.531523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.532012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.532022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.191 [2024-07-25 07:36:47.532559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.191 [2024-07-25 07:36:47.532588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.191 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.532998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.533010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.533546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.533574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.533896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.533906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.534449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.534477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.534833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.534843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 
00:30:40.461 [2024-07-25 07:36:47.535292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.535301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.535744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.535753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.536197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.536210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.536623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.536631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.537062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.537071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.537591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.537620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.538067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.538078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.538602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.538632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.539091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.539102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.539645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.539674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 
00:30:40.461 [2024-07-25 07:36:47.540122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.540131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.540592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.540601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.541041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.541049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.541583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.541612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.542059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.542068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.542601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.542629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.543094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.543104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.543641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.543670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.544120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.544130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.544569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.544578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 
00:30:40.461 [2024-07-25 07:36:47.545046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.545054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.545478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.545507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.545953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.545964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.546524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.461 [2024-07-25 07:36:47.546552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.461 qpair failed and we were unable to recover it. 00:30:40.461 [2024-07-25 07:36:47.547014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.547024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.547571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.547600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.548038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.548048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.548603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.548632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.549094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.549107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.549645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.549674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 
00:30:40.462 [2024-07-25 07:36:47.550118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.550128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.550603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.550612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.551074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.551083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.551606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.551636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.552085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.552094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.552501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.552530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.552990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.553000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.553534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.553563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.554002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.554013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.554442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.554471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 
00:30:40.462 [2024-07-25 07:36:47.554708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.554717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.555162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.555170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.555605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.555614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.556077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.556086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.556577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.556586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.557024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.557032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.557564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.557592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.558043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.558052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.558607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.558635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 00:30:40.462 [2024-07-25 07:36:47.559082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.462 [2024-07-25 07:36:47.559091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.462 qpair failed and we were unable to recover it. 
00:30:40.462 [2024-07-25 07:36:47.559591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.462 [2024-07-25 07:36:47.559600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:40.462 qpair failed and we were unable to recover it.
00:30:40.462-00:30:40.469 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnection attempt, with timestamps from [2024-07-25 07:36:47.559809] through [2024-07-25 07:36:47.658957] ...]
00:30:40.469 [2024-07-25 07:36:47.659487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.659516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.659999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.660009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.660544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.660573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.661042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.661053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.661596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.661626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.662077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.662087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.662549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.662557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.662987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.662996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.663454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.663482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.663933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.663943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 
00:30:40.469 [2024-07-25 07:36:47.664476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.664505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.664968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.664978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.665418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.665446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.665896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.665906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.666440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.666469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.666932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.666942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.469 [2024-07-25 07:36:47.667481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.469 [2024-07-25 07:36:47.667510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.469 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.667953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.667963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.668489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.668518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.668973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.668984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 
00:30:40.470 [2024-07-25 07:36:47.669539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.669568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.669915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.669925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.670412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.670441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.670912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.670922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.671427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.671436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.671871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.671879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.672448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.672476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.672947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.672957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.673502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.673531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.673986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.673995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 
00:30:40.470 [2024-07-25 07:36:47.674444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.674474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.674901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.674911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.675491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.675520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.675978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.675988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.676523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.676551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.676901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.676914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.677387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.677396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.677836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.677844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.678280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.678290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.678694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.678702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 
00:30:40.470 [2024-07-25 07:36:47.678916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.678928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.679357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.679366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.679768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.679777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.679988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.679998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.680441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.680449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.680953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.680961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.681392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.681400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.681870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.681878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.682349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.682357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.682843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.682851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 
00:30:40.470 [2024-07-25 07:36:47.683283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.683292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.683721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.683729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.683940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.683949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.684348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.684356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.470 qpair failed and we were unable to recover it. 00:30:40.470 [2024-07-25 07:36:47.684674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.470 [2024-07-25 07:36:47.684682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.685210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.685218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.685632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.685640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.686076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.686084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.686434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.686442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.686900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.686908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 
00:30:40.471 [2024-07-25 07:36:47.687370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.687378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.687558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.687566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.688014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.688023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.688458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.688467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.688886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.688894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.689333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.689341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.689683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.689691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.690153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.690162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.690609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.690618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.691059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.691067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 
00:30:40.471 [2024-07-25 07:36:47.691598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.691626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.692128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.692138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.692588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.692596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.693119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.693127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.693628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.693657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.693993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.694006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.694548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.694577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.695025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.695035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.695575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.695603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.696032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.696042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 
00:30:40.471 [2024-07-25 07:36:47.696499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.696527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.696977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.696987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.697546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.697576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.698006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.698016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.698556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.698584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.699035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.471 [2024-07-25 07:36:47.699045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.471 qpair failed and we were unable to recover it. 00:30:40.471 [2024-07-25 07:36:47.699604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.699633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.700092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.700102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.700645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.700674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.701125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.701136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 
00:30:40.472 [2024-07-25 07:36:47.701588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.701597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.702045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.702054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.702514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.702542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.703014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.703024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.703562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.703591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.704022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.704031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.704500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.704529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.704978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.704989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.705556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.705585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.705931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.705940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 
00:30:40.472 [2024-07-25 07:36:47.706471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.706500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.706950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.706959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.707534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.707564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.708028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.708037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.708593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.708623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.709062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.709072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.709608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.709637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.710107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.710117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.710645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.710673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.711163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.711173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 
00:30:40.472 [2024-07-25 07:36:47.711701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.711730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.712074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.712085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.712520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.712528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.712971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.712979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.713550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.713579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.714005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.714019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.714621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.714650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.715110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.715120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.715556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.715565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.716025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.716033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 
00:30:40.472 [2024-07-25 07:36:47.716575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.716603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.717051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.717061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.717502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.717531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.472 [2024-07-25 07:36:47.717998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.472 [2024-07-25 07:36:47.718007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.472 qpair failed and we were unable to recover it. 00:30:40.473 [2024-07-25 07:36:47.718493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-07-25 07:36:47.718522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-07-25 07:36:47.718973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-07-25 07:36:47.718983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-07-25 07:36:47.719534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-07-25 07:36:47.719563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-07-25 07:36:47.719897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-07-25 07:36:47.719908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-07-25 07:36:47.720505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-07-25 07:36:47.720533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 00:30:40.473 [2024-07-25 07:36:47.721006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.473 [2024-07-25 07:36:47.721016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.473 qpair failed and we were unable to recover it. 
00:30:40.473 [2024-07-25 07:36:47.721581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.473 [2024-07-25 07:36:47.721610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:40.473 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry logged between 07:36:47.722 and 07:36:47.819 ...]
00:30:40.478 [2024-07-25 07:36:47.820047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.478 [2024-07-25 07:36:47.820055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:40.478 qpair failed and we were unable to recover it.
00:30:40.778 [2024-07-25 07:36:47.820599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.820630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.821077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.821088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.821544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.821553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.821994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.822003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.822563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.822592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.823040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.823050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.823603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.823632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.824086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.824095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.824654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.824683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.825136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.825147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 
00:30:40.778 [2024-07-25 07:36:47.825601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.825611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.826047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.826056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.826552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.826581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.827056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.827065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.827532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.827562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.827953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.827968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.828584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.828612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.829090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.829100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.829310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.829321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.829835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.829843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 
00:30:40.778 [2024-07-25 07:36:47.830285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.830293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.830735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.830743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.831210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.831219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.831486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.831494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.831957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.831965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.832310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.832319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.832785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.832794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.833254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.833262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.833717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.833725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.834071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.834080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 
00:30:40.778 [2024-07-25 07:36:47.834529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.834538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.834996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.835004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.835543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.835572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.836066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.836076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.778 qpair failed and we were unable to recover it. 00:30:40.778 [2024-07-25 07:36:47.836633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.778 [2024-07-25 07:36:47.836662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.837098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.837109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.837680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.837709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.838175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.838185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.838728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.838758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.839415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.839444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 
00:30:40.779 [2024-07-25 07:36:47.839906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.839915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.840433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.840462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.840909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.840920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.841442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.841471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.841913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.841924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.842435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.842464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.842920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.842930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.843271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.843280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.843713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.843722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.844162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.844171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 
00:30:40.779 [2024-07-25 07:36:47.844617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.844626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.845083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.845092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.845444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.845453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.845902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.845910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.846352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.846360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.846770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.846781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.846995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.847003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.847415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.847424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.847767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.847776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.848208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.848216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 
00:30:40.779 [2024-07-25 07:36:47.848648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.848656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.849096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.849106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.849438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.849448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.849874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.849883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.850269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.850278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.850721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.850730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.851152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.851160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.851471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.851480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.851784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.851793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 00:30:40.779 [2024-07-25 07:36:47.852132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.779 [2024-07-25 07:36:47.852141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.779 qpair failed and we were unable to recover it. 
00:30:40.779 [2024-07-25 07:36:47.852609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.852618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.852777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.852785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.853202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.853212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.853601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.853609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.854042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.854050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.854558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.854587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.855016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.855026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.855573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.855602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.856028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.856038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.856532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.856561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 
00:30:40.780 [2024-07-25 07:36:47.856906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.856916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.857399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.857429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.857884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.857894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.858363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.858372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.858816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.858825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.859044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.859056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.859376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.859385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.859599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.859608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.860041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.860050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.860494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.860502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 
00:30:40.780 [2024-07-25 07:36:47.860937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.860945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.861415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.861445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.861815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.861825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.862395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.862404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.862838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.862847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.863296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.863309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.863743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.863752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.864109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.864118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.864505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.864513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.864878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.864887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 
00:30:40.780 [2024-07-25 07:36:47.865322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.865330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.865770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.865778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.866214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.866223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.866550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.866559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.866989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.866997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.867572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.867601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.868052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.868061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.780 qpair failed and we were unable to recover it. 00:30:40.780 [2024-07-25 07:36:47.868452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.780 [2024-07-25 07:36:47.868480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.868946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.868956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.869416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.869445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 
00:30:40.781 [2024-07-25 07:36:47.869938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.869948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.870499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.870527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.870984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.870994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.871427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.871456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.871907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.871916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.872485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.872514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.872881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.872891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.873327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.873335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.873676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.873684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.874133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.874140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 
00:30:40.781 [2024-07-25 07:36:47.874473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.874482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.874903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.874911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.875353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.875362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.875786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.875794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.876225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.876233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.876676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.876684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.877118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.877125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.877575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.877584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.878017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.878025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 00:30:40.781 [2024-07-25 07:36:47.878460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.781 [2024-07-25 07:36:47.878469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.781 qpair failed and we were unable to recover it. 
00:30:40.781 [2024-07-25 07:36:47.878906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.781 [2024-07-25 07:36:47.878913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:40.781 qpair failed and we were unable to recover it.
[... the identical connect() failed, errno = 111 / sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it sequence repeats for every reconnection attempt logged between 07:36:47.878906 and 07:36:47.976405, differing only in timestamps ...]
00:30:40.787 [2024-07-25 07:36:47.976399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.787 [2024-07-25 07:36:47.976405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:40.787 qpair failed and we were unable to recover it.
00:30:40.787 [2024-07-25 07:36:47.976881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.976889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.977351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.977358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.977861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.977868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.978387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.978415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.978904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.978913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.979234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.979242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.979704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.979712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.979997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.980010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.980461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.980468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.980922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.980930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 
00:30:40.787 [2024-07-25 07:36:47.981459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.981487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.981922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.981931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.982371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.982380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.982839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.982846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.983205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.983213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.983659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.983666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.983966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.983974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.984545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.984573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.787 [2024-07-25 07:36:47.985024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.787 [2024-07-25 07:36:47.985033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.787 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.985566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.985593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 
00:30:40.788 [2024-07-25 07:36:47.986024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.986033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.986431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.986459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.986895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.986906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.987424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.987451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.987884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.987893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.988472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.988500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.988933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.988943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.989478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.989506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.989945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.989954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.990515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.990543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 
00:30:40.788 [2024-07-25 07:36:47.990989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.990997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.991525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.991552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.991989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.991999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.992561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.992588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.993018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.993027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.993565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.993593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.994051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.994060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.994612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.994639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.995073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.995081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.995605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.995632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 
00:30:40.788 [2024-07-25 07:36:47.996067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.996076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.996637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.996665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.997099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.997109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.997643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.997671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.998118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.998126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.998649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.998677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.999112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.999121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.999561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.999569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:47.999905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:47.999911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:48.000413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:48.000441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 
00:30:40.788 [2024-07-25 07:36:48.000943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:48.000952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:48.001490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:48.001518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:48.001952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:48.001961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:48.002507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:48.002534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:48.002874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:48.002883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:48.003327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.788 [2024-07-25 07:36:48.003334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.788 qpair failed and we were unable to recover it. 00:30:40.788 [2024-07-25 07:36:48.003790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.003796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.004215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.004222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.004659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.004666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.005084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.005091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 
00:30:40.789 [2024-07-25 07:36:48.005511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.005518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.005936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.005944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.006299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.006310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.006627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.006635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.007078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.007085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.007504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.007511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.007974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.007980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.008403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.008410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.008840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.008847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.009149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.009158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 
00:30:40.789 [2024-07-25 07:36:48.009496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.009503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.009946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.009953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.010391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.010398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.010906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.010913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.011434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.011462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.011920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.011929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.012467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.012495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.012834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.012842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.013286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.013293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.013766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.013773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 
00:30:40.789 [2024-07-25 07:36:48.014266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.014272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.014710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.014717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.014988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.014996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.015436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.015444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.015908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.015914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.016236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.016244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.016693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.016700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.017115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.017123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.017560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.017567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.017986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.017993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 
00:30:40.789 [2024-07-25 07:36:48.018342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.018350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.018788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.018794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.019213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.019221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.789 qpair failed and we were unable to recover it. 00:30:40.789 [2024-07-25 07:36:48.019649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.789 [2024-07-25 07:36:48.019656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.020075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.020081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.020500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.020507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.020926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.020932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.021442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.021470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.021900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.021910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.022357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.022364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 
00:30:40.790 [2024-07-25 07:36:48.022805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.022813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.023275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.023283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.023637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.023647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.024096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.024103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.024532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.024539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.024969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.024976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.025412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.025419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.025873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.025881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.026318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.026325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.026791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.026799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 
00:30:40.790 [2024-07-25 07:36:48.027232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.027239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.027680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.027687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.028102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.028108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.028335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.028342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.028783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.028790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.029207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.029214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.029657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.029664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.030127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.030134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.030572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.030579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.031042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.031048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 
00:30:40.790 [2024-07-25 07:36:48.031553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.031581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.032046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.032056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.032596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.032623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.033062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.033070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.033512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.033540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.033982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.033991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.034541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.034569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.035013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.035021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.035502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.035529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.036045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.036054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 
00:30:40.790 [2024-07-25 07:36:48.036573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.790 [2024-07-25 07:36:48.036601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.790 qpair failed and we were unable to recover it. 00:30:40.790 [2024-07-25 07:36:48.037032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.037041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.037576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.037603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.038037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.038046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.038564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.038592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.039043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.039051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.039581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.039608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.040045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.040054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.040464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.040490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.040927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.040936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 
00:30:40.791 [2024-07-25 07:36:48.041450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.041478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.041896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.041906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.042449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.042480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.042907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.042915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.043120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.043130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.043569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.043577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.043992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.043999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.044443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.044450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.044866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.044873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.045446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.045474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 
00:30:40.791 [2024-07-25 07:36:48.045905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.045914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.046310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.046318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.046675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.046683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.047021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.047028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.047452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.047459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.047911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.047917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.048468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.048496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.048958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.048966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.049494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.049522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.049858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.049866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 
00:30:40.791 [2024-07-25 07:36:48.050352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.050360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.050785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.050792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.051113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.051119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.051419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.051427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.051852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.051859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.052197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.791 [2024-07-25 07:36:48.052210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.791 qpair failed and we were unable to recover it. 00:30:40.791 [2024-07-25 07:36:48.052634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.052641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.052976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.052983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.053509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.053536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.053878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.053887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 
00:30:40.792 [2024-07-25 07:36:48.054235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.054243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.054686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.054693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.055113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.055119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.055451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.055459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.055891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.055898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.056317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.056324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.056758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.056765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.057113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.057120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.057549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.057556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.057956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.057964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 
00:30:40.792 [2024-07-25 07:36:48.058398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.058406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.058866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.058872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.059369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.059379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.059796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.059803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.060263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.060270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.060694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.060700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.061111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.061118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.061548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.061556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.061893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.061900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.062316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.062324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 
00:30:40.792 [2024-07-25 07:36:48.062764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.062771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.063209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.063216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.063633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.063639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.064061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.064068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.064561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.064568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.064888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.064895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.065343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.065351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.065787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.065794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.066212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.066219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.066667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.066674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 
00:30:40.792 [2024-07-25 07:36:48.067090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.067096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.792 qpair failed and we were unable to recover it. 00:30:40.792 [2024-07-25 07:36:48.067594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.792 [2024-07-25 07:36:48.067601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.067943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.067951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.068289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.068296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.068705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.068712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.069041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.069049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.069393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.069399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.069859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.069866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.070315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.070322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.070752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.070759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 
00:30:40.793 [2024-07-25 07:36:48.071091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.071099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.071516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.071523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.072018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.072024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.072447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.072455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.072878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.072885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.073450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.073478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.073919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.073928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.074477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.074505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.074848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.074857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.075308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.075316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 
00:30:40.793 [2024-07-25 07:36:48.075533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.075543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.075996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.076003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.076215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.076229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.076655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.076663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.077085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.077093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.077557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.077564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.077998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.078005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.078444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.078451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.078909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.078916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.079333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.079340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 
00:30:40.793 [2024-07-25 07:36:48.079770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.079777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.080199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.080210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.080644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.080650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.081076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.081083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.081603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.081631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.082081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.082090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.082651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.082680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.083143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.083153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.083584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.793 [2024-07-25 07:36:48.083591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.793 qpair failed and we were unable to recover it. 00:30:40.793 [2024-07-25 07:36:48.084089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.084096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 
00:30:40.794 [2024-07-25 07:36:48.084519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.084526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.084951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.084958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.085507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.085535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.085975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.085984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.086437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.086464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.086924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.086932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.087451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.087479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.087914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.087923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.088464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.088492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.088956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.088964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 
00:30:40.794 [2024-07-25 07:36:48.089492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.089520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.090030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.090038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.090538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.090565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.091028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.091037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.091552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.091580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.092018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.092027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.092516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.092544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.093010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.093020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.093547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.093575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.094014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.094022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 
00:30:40.794 [2024-07-25 07:36:48.094257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.094264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.094703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.094710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.094922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.094937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.095379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.095387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.095817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.095824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.096245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.096252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.096743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.096750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.097171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.097178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.097707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.097714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.098148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.098155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 
00:30:40.794 [2024-07-25 07:36:48.098584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.098590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.099023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.099031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.099527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.099556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.099791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.099800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.100234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.100242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.100691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.100698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.794 [2024-07-25 07:36:48.101143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.794 [2024-07-25 07:36:48.101150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.794 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.101586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.101594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.102021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.102027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.102540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.102567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 
00:30:40.795 [2024-07-25 07:36:48.103003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.103011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.103527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.103555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.104000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.104009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.104522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.104549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.104989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.104997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.105397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.105424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.105777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.105786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.106241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.106248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.106701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.106708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.107131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.107138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 
00:30:40.795 [2024-07-25 07:36:48.107575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.107582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:40.795 [2024-07-25 07:36:48.108008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.795 [2024-07-25 07:36:48.108016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:40.795 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.108531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.108560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.108994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.109003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.109548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.109576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.110008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.110018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.110535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.110564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.110990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.110999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.111629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.111656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.112158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.112166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 
00:30:41.067 [2024-07-25 07:36:48.112694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.112721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.113060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.113069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.113605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.113633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.114071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.114079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.114606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.114634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.115106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.115116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.115641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.115668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.116090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.116099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.116556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.116563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.116987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.116995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 
00:30:41.067 [2024-07-25 07:36:48.117548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.117575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.118011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.118019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.118536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.118564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.119000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.119009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.119535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.119563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.119996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.120005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.120518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.120546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.121006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.121014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.121529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.121556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.067 qpair failed and we were unable to recover it. 00:30:41.067 [2024-07-25 07:36:48.122074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.067 [2024-07-25 07:36:48.122082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 
00:30:41.068 [2024-07-25 07:36:48.122604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.122632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.123103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.123111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.123649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.123677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.124132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.124141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.124675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.124703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.124944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.124953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.125441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.125448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.125744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.125753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.126195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.126205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.126660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.126670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 
00:30:41.068 [2024-07-25 07:36:48.127099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.127106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.127513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.127520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.127957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.127964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.128489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.128516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.128952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.128960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.129492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.129520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.129952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.129960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.130479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.130507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.130960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.130968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.131493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.131520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 
00:30:41.068 [2024-07-25 07:36:48.131955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.131964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.132520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.132547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.132762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.132774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.133107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.133116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.133579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.133587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.134013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.134021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.134458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.134466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.134884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.134892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.135446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.135473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.135939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.135947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 
00:30:41.068 [2024-07-25 07:36:48.136485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.136513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.136949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.136958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.137497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.137525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.138002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.138011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.138620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.138647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.139085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.068 [2024-07-25 07:36:48.139093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.068 qpair failed and we were unable to recover it. 00:30:41.068 [2024-07-25 07:36:48.139436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.139444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.139909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.139916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.140520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.140547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.140986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.140994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 
00:30:41.069 [2024-07-25 07:36:48.141487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.141514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.141950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.141959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.142495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.142523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.142958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.142966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.143501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.143528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.143968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.143976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.144500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.144528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.144971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.144980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.145518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.145546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.145984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.145996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 
00:30:41.069 [2024-07-25 07:36:48.146510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.146538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.146975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.146983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.147538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.147566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.148001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.148010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.148537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.148566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.148997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.149006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.149519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.149547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.149982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.149991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.150528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.150555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.150986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.150994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 
00:30:41.069 [2024-07-25 07:36:48.151350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.151358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.151653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.151661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.152076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.152083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.152505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.152512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.152929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.152935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.153356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.153363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.153567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.153578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.154017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.154024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.154450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.154457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.154882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.154888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 
00:30:41.069 [2024-07-25 07:36:48.155304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.155312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.155711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.155718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.156158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.069 [2024-07-25 07:36:48.156165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.069 qpair failed and we were unable to recover it. 00:30:41.069 [2024-07-25 07:36:48.156671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.156679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.157094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.157100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.157529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.157536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.157972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.157979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.158491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.158519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.158873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.158881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.159302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.159309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 
00:30:41.070 [2024-07-25 07:36:48.159767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.159774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.160243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.160250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.160715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.160721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.161141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.161147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.161620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.161627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.162047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.162054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.162563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.162591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.162924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.162932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.163480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.163508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.163942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.163954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 
00:30:41.070 [2024-07-25 07:36:48.164461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.164488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.164924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.164933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.165357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.165365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.165796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.165803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 286137 Killed "${NVMF_APP[@]}" "$@" 00:30:41.070 [2024-07-25 07:36:48.166246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.166253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.166502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.166508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:30:41.070 [2024-07-25 07:36:48.166949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.166957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:41.070 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:41.070 [2024-07-25 07:36:48.167404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.167411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 
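The kill message above ("target_disconnect.sh: line 36: 286137 Killed \"${NVMF_APP[@]}\"") is the test script deliberately terminating the running nvmf target application. On Linux, errno 111 is ECONNREFUSED, so the long run of paired posix_sock_create / nvme_tcp_qpair_connect_sock errors is consistent with the host initiator repeatedly retrying its TCP connection to 10.0.0.2:4420 while nothing is listening there yet. The standalone C sketch below only illustrates how an unanswered connect() surfaces errno 111 in the same form as the log messages; it is not SPDK code, and the address and port are taken from the log purely for the example.

    /* Illustrative only: a minimal TCP connect attempt that reports
     * ECONNREFUSED (errno 111 on Linux) when no listener is bound to the
     * target address/port. Not SPDK code. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int try_connect(const char *ip, uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no process listening on ip:port, connect() fails and errno
             * is ECONNREFUSED (111), matching "connect() failed, errno = 111". */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n", errno, strerror(errno));
            close(fd);
            return -1;
        }
        return fd;
    }

    int main(void)
    {
        int fd = try_connect("10.0.0.2", 4420);   /* address/port taken from the log */
        if (fd >= 0)
            close(fd);
        return fd >= 0 ? 0 : 1;
    }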
00:30:41.070 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:41.070 [2024-07-25 07:36:48.167764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.167771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.070 [2024-07-25 07:36:48.168195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.168206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.168634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.168641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.169062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.169069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.169583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.169611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.170049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.170058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.170591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.170619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.171057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.171065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 00:30:41.070 [2024-07-25 07:36:48.171612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.070 [2024-07-25 07:36:48.171639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.070 qpair failed and we were unable to recover it. 
00:30:41.071 [2024-07-25 07:36:48.172078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.172086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.172434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.172460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.172918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.172928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.173466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.173495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.173951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.173960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.174523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.174551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.175001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.175013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=287033 00:30:41.071 [2024-07-25 07:36:48.175547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.175575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 
00:30:41.071 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 287033 00:30:41.071 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:41.071 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 287033 ']' 00:30:41.071 [2024-07-25 07:36:48.176054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.176064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.071 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:41.071 [2024-07-25 07:36:48.176608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.176635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.071 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:41.071 [2024-07-25 07:36:48.177108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.177118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 07:36:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.071 [2024-07-25 07:36:48.177687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.177715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.178154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.178163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.178665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.178673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 
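After the old target process is gone, the trace above relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace (nvmfpid=287033) and blocks in waitforlisten until the new process is ready; the connect() retries keep failing with errno 111 until that point. The sketch below is only a generic "poll until the endpoint accepts connections" loop in the same spirit, not the script's actual mechanism (the harness's waitforlisten waits on the application's RPC socket, not the NVMe-oF port); the address, port, and 30-second budget are assumptions for the example.

    /* Illustrative only: retry a TCP connect to an assumed address/port until
     * it succeeds or a timeout expires, mirroring the general "restart the
     * target, then wait until it is reachable again" shape of the test flow. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int endpoint_up(const char *ip, uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 0;

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
        close(fd);
        return ok;
    }

    int main(void)
    {
        for (int i = 0; i < 30; i++) {            /* assumed ~30 s budget, 1 s per try */
            if (endpoint_up("10.0.0.2", 4420)) {  /* address/port taken from the log */
                puts("listener is up");
                return 0;
            }
            sleep(1);
        }
        fputs("timed out waiting for the listener\n", stderr);
        return 1;
    }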
00:30:41.071 [2024-07-25 07:36:48.179098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.179105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.179666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.179693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.180144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.180155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.180612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.180640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.181113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.181122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.181593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.181601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.181941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.181948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.182405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.182433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.182869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.182879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.183323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.183331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 
00:30:41.071 [2024-07-25 07:36:48.183775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.183782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.184241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.184249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.071 [2024-07-25 07:36:48.184583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.071 [2024-07-25 07:36:48.184591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.071 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.185021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.185028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.185519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.185530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.185741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.185753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.186222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.186230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.186588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.186595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.187060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.187067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.187500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.187507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 
00:30:41.072 [2024-07-25 07:36:48.187878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.187885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.188336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.188343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.188851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.188858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.189273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.189281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.189727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.189733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.190023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.190029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.190458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.190465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.190888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.190895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.191317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.191325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.191792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.191799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 
00:30:41.072 [2024-07-25 07:36:48.192219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.192227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.192667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.192674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.193099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.193106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.193565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.193573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.193995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.194002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.194350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.194357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.194794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.194801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.195100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.195108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.195534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.195541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.195886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.195893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 
00:30:41.072 [2024-07-25 07:36:48.196318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.196325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.196762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.196769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.197188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.197195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.197545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.197552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.198034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.198041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.198570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.198597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.199036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.072 [2024-07-25 07:36:48.199046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.072 qpair failed and we were unable to recover it. 00:30:41.072 [2024-07-25 07:36:48.199650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.199678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.200146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.200154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.200651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.200678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 
00:30:41.073 [2024-07-25 07:36:48.201044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.201052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.201601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.201629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.202102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.202112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.202598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.202627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.203128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.203140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.203653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.203661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.204090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.204097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.204395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.204403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.204857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.204864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.205212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.205219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 
00:30:41.073 [2024-07-25 07:36:48.205443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.205450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.205644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.205654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.205958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.205966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.206395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.206404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.206879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.206886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.207326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.207333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.207793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.207800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.208311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.208318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.208771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.208778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.208999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.209008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 
00:30:41.073 [2024-07-25 07:36:48.209400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.209407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.209629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.209636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.210091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.210097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.210447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.210454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.210944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.210951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.211207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.211214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.211526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.211534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.211984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.211991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.212364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.212371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.212845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.212852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 
00:30:41.073 [2024-07-25 07:36:48.213154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.213160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.213622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.213629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.213921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.073 [2024-07-25 07:36:48.213928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.073 qpair failed and we were unable to recover it. 00:30:41.073 [2024-07-25 07:36:48.214495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.214523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.214973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.214982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.215515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.215543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.215891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.215900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.216373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.216380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.216834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.216841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.217276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.217283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 
00:30:41.074 [2024-07-25 07:36:48.217582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.217589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.218048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.218055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.218403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.218410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.218878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.218885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.219287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.219298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.219738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.219745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.220174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.220182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.220625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.220632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.221057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.221065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.221605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.221633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 
00:30:41.074 [2024-07-25 07:36:48.222144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.222153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.222622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.222650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.223106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.223115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.223565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.223573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.224011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.224018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.224484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.224511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.224953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.224962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.225499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.225527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.225984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.225993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.226605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.226634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 
00:30:41.074 [2024-07-25 07:36:48.226728] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:30:41.074 [2024-07-25 07:36:48.226773] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.074 [2024-07-25 07:36:48.227125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.227134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.227589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.227596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.227956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.227964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.228537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.228564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.229075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.229085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.074 qpair failed and we were unable to recover it. 00:30:41.074 [2024-07-25 07:36:48.229561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.074 [2024-07-25 07:36:48.229570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.229793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.229801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.230273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.230281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.230743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.230751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 
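The two non-error entries in the block above are the informative ones in this stretch: the nvmf target application starts up (SPDK v24.09-pre, git sha1 223450b47, DPDK 24.03.0) and prints its DPDK EAL parameters while the initiator keeps retrying in the background. One detail worth decoding is the core mask: -c 0xF0 selects four cores, since 0xF0 has bits 4-7 set; --file-prefix=spdk0 and --proc-type=auto are the usual EAL options for keeping this instance's hugepage files separate from any other DPDK process on the host and for auto-detecting primary/secondary process type. The tiny decode below is purely illustrative (the function name is made up); only the 0xF0 value comes from the log.

    # Illustrative only: expand a DPDK core mask such as the "-c 0xF0" seen in the
    # EAL parameters above into the list of CPU cores it selects.
    def cores_from_mask(mask):
        return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

    print(cores_from_mask(0xF0))  # -> [4, 5, 6, 7], i.e. the nvmf target runs on cores 4-7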
00:30:41.075 [2024-07-25 07:36:48.231211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.231219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.231670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.231679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.232147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.232154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.232669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.232677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.232793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.232800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.233211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.233219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.233800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.233808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.234257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.234265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.234716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.234724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.235172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.235180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 
00:30:41.075 [2024-07-25 07:36:48.235635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.235643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.236102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.236109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.236467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.236475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.236912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.236920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.237391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.237399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.237619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.237626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.238086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.238093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.238566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.238574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.238951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.238959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.239403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.239410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 
00:30:41.075 [2024-07-25 07:36:48.239853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.239860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.240166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.240174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.240637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.240645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.240925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.240933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.241411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.241438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.241660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.241671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.242127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.242135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.242591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.242602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.243129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.243136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.243581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.243588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 
00:30:41.075 [2024-07-25 07:36:48.243807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.243817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.244269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.075 [2024-07-25 07:36:48.244277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.075 qpair failed and we were unable to recover it. 00:30:41.075 [2024-07-25 07:36:48.244725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.244732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.245062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.245069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.245550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.245558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.245894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.245901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.246338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.246345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.246775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.246782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.247214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.247223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.247641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.247648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 
00:30:41.076 [2024-07-25 07:36:48.248075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.248081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.248527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.248554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.249070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.249079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.249595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.249622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.250060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.250068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.250652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.250679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.251149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.251158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.251772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.251799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.252075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.252083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.252612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.252640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 
00:30:41.076 [2024-07-25 07:36:48.252739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.252749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.252975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.252984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.253507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.253516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.253718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.253726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.254209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.254217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.254445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.254451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.254902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.254910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.255341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.255348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.255780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.255787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.256210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.256217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 
00:30:41.076 [2024-07-25 07:36:48.256661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.256667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.076 qpair failed and we were unable to recover it. 00:30:41.076 [2024-07-25 07:36:48.257114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.076 [2024-07-25 07:36:48.257121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.257556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.257563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.257986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.257992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.258397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.258404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.258916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.258922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.077 [2024-07-25 07:36:48.259348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.259355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.259853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.259862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.260397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.260425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.260960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.260968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 
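Buried in the run above is one line that is not a connection error: "EAL: No free 2048 kB hugepages reported on node 1". It means DPDK's EAL found no free 2 MiB hugepages on NUMA node 1 while the nvmf target initialized; this is usually informational rather than fatal as long as enough hugepages are free elsewhere, and the run continues past it here. If it did need triage, the standard places to look are /proc/meminfo and the per-node sysfs counters. The sketch below is a generic diagnostic aside, not part of the test scripts, and assumes a standard Linux sysfs layout.

    # Illustrative diagnostic only: summarize 2 MiB hugepage availability, overall
    # and for NUMA node 1, the node named in the EAL message above.
    from pathlib import Path

    def hugepage_summary(node=1):
        overall = {}
        for line in Path("/proc/meminfo").read_text().splitlines():
            key, _, value = line.partition(":")
            if key.startswith("HugePages") or key == "Hugepagesize":
                overall[key] = value.strip()

        # Per-node counter; this sysfs path is standard on Linux, but verify it
        # on the machine in question.
        node_path = Path(f"/sys/devices/system/node/node{node}"
                         "/hugepages/hugepages-2048kB/free_hugepages")
        node_free = int(node_path.read_text()) if node_path.exists() else None
        return overall, node_free

    if __name__ == "__main__":
        meminfo, node1_free = hugepage_summary(1)
        print(meminfo)
        print("free 2 MiB hugepages on node 1:", node1_free)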
00:30:41.077 [2024-07-25 07:36:48.261485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.261513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.261962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.261973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.262547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.262575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.263060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.263069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.263595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.263623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.263955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.263964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.264504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.264532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.264981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.264989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.265590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.265618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.266059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.266068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 
00:30:41.077 [2024-07-25 07:36:48.266659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.266686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.267138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.267146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.267678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.267705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.268145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.268154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.268602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.268630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.269073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.269082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.269660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.269688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.270024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.270033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.270572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.270600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.271041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.271050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 
00:30:41.077 [2024-07-25 07:36:48.271596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.271624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.271868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.271877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.272444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.272471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.272689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.272700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.273144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.077 [2024-07-25 07:36:48.273151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.077 qpair failed and we were unable to recover it. 00:30:41.077 [2024-07-25 07:36:48.273637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.273644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.273725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.273731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.274147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.274154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.274594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.274602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.275024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.275031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 
00:30:41.078 [2024-07-25 07:36:48.275469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.275475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.275896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.275902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.276408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.276435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.276946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.276954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.277487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.277514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.277954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.277962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.278505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.278533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.279001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.279013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.279554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.279582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.280038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.280046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 
00:30:41.078 [2024-07-25 07:36:48.280576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.280604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.281052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.281061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.281593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.281621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.282068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.282076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.282438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.282465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.282972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.282981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.283523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.283551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.284001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.284009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.284536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.284563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.284946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.284955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 
00:30:41.078 [2024-07-25 07:36:48.285421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.285449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.285919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.285928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.286480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.286507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.286986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.286995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.287550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.287578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.287967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.287976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.288540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.288568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.289011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.289019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.289554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.289581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.078 qpair failed and we were unable to recover it. 00:30:41.078 [2024-07-25 07:36:48.290029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.078 [2024-07-25 07:36:48.290038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 
00:30:41.079 [2024-07-25 07:36:48.290559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.290586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.291063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.291071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.291544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.291572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.292025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.292034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.292592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.292619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.293104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.293112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.293645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.293673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.294124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.294133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.294578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.294586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.295030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.295037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 
00:30:41.079 [2024-07-25 07:36:48.295576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.295603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.296057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.296066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.296563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.296591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.297069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.297078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.297610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.297638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.298177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.298186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.298753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.298781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.299417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.299447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.299895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.299903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.300454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.300483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 
00:30:41.079 [2024-07-25 07:36:48.301003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.301011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.301502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.301530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.301981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.301990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.302477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.302504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.302967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.302976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.303195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.303213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.303662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.303670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.303984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.303991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.304542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.304570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.305109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.305118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 
00:30:41.079 [2024-07-25 07:36:48.305659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.305687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.305913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.305924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.306075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.306082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.306601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.306609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.306913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.306921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.079 [2024-07-25 07:36:48.307220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.079 [2024-07-25 07:36:48.307235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.079 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.307694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.307701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.308090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.308097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.308618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.308625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.308836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.308846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 
00:30:41.080 [2024-07-25 07:36:48.309281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.309289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.309700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.309707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.310158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.310164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.310505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.310512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.310942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.310949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.311255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.311262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.311514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:41.080 [2024-07-25 07:36:48.311717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.311724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.312176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.312183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.312624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.312632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 
00:30:41.080 [2024-07-25 07:36:48.312980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.312988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.313427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.313434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.313759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.313766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.314211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.314219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.314669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.314676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.315108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.315115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.315568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.315575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.316002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.316009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.316525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.316553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.316791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.316799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 
00:30:41.080 [2024-07-25 07:36:48.317250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.317258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.317595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.317603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.318058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.318065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.318302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.318310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.318772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.318778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.319208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.319215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.319633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.319640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.320092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.320098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.320553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.320561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.080 [2024-07-25 07:36:48.320992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.320999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 
00:30:41.080 [2024-07-25 07:36:48.321454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.080 [2024-07-25 07:36:48.321482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.080 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.321932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.321945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.322524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.322553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.322973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.322981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.323440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.323467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.323915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.323923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.324498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.324526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.324972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.324980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.325534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.325561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.325869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.325878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 
00:30:41.081 [2024-07-25 07:36:48.326314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.326322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.326749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.326757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.327070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.327077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.327539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.327546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.327971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.327978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.328193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.328204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.328688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.328716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.329154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.329163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.329747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.329775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.330119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.330128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 
00:30:41.081 [2024-07-25 07:36:48.330442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.330450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.330814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.330823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.331309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.331317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.331796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.331802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.332264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.332270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.332756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.332764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.333204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.333213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.333649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.333655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.334117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.334124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.334246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.334254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 
00:30:41.081 [2024-07-25 07:36:48.334589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.334597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.081 [2024-07-25 07:36:48.334926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.081 [2024-07-25 07:36:48.334934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.081 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.335319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.335327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.335770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.335777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.336109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.336116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.336546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.336553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.336795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.336801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.337242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.337249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.337383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.337390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.337790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.337797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 
00:30:41.082 [2024-07-25 07:36:48.338218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.338225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.338650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.338659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.339113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.339120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.339547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.339554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.339979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.339986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.340488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.340495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.340761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.340767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.341210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.341216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.341631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.341639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.341760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.341766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 
00:30:41.082 [2024-07-25 07:36:48.341980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.341987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.342416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.342423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.342796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.342803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.343257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.343264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.343778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.343785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.344210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.344217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.344720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.344727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.345150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.345156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.345665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.345672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.346002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.346010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 
00:30:41.082 [2024-07-25 07:36:48.346443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.346451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.346873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.346880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.347321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.347329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.347750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.347757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.348180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.348186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.348352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.348370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.348862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.348870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.349292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.349300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.349750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.349757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.082 qpair failed and we were unable to recover it. 00:30:41.082 [2024-07-25 07:36:48.350218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.082 [2024-07-25 07:36:48.350226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 
00:30:41.083 [2024-07-25 07:36:48.350681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.350688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.351134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.351141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.351585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.351593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.351784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.351799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.352260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.352268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.352592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.352599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.352897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.352904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.353325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.353333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.353743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.353750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.354089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.354096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 
00:30:41.083 [2024-07-25 07:36:48.354312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.354323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.354734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.354744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.355170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.355178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.355502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.355509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.355716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.355726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.355930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.355939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.356393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.356400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.356831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.356839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.357263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.357271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.357743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.357750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 
00:30:41.083 [2024-07-25 07:36:48.358086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.358093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.358533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.358541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.358957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.358963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.359429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.359448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.359865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.359873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.360316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.360324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.360820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.360828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.361287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.361294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.361768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.361774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.362210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.362218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 
00:30:41.083 [2024-07-25 07:36:48.362663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.362669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.363089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.363095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.363521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.363528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.363864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.363871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.364296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.364304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.364734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.364740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.083 [2024-07-25 07:36:48.365214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.083 [2024-07-25 07:36:48.365221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.083 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.365635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.365642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.366060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.366066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.366288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.366295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 
00:30:41.084 [2024-07-25 07:36:48.366727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.366733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.367159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.367166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.367617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.367624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.367960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.367968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.368407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.368414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.368830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.368836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.369253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.369261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.369585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.369592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.369932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.369939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.370353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.370360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 
00:30:41.084 [2024-07-25 07:36:48.370787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.370794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.371215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.371222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.371538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.371544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.371963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.371970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.372396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.372403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.372789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.372796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.373247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.373254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.373684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.373691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.374114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.374121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.374280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.374288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 
00:30:41.084 [2024-07-25 07:36:48.374696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.374703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.375116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.375122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.375571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.375578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.375805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.375812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.376380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.376388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.376655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.084 [2024-07-25 07:36:48.376682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.084 [2024-07-25 07:36:48.376690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.084 [2024-07-25 07:36:48.376696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.084 [2024-07-25 07:36:48.376702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.084 [2024-07-25 07:36:48.376814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.376821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.377273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.377281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 
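The app_setup_trace notices in the entries above describe how to capture a trace of the nvmf target while it is still running. As a minimal sketch based only on those notice lines (it assumes the spdk_trace binary is on PATH for this run, and /tmp/nvmf_trace.0 is just an illustrative destination, not something taken from this log):
# Capture a snapshot of trace events from the running nvmf target (instance id 0),
# exactly as suggested by the app_setup_trace notice above.
spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file for offline analysis/debug, also per the notice.
# The /tmp destination is an arbitrary choice for this sketch.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0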
00:30:41.084 [2024-07-25 07:36:48.377267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:30:41.084 [2024-07-25 07:36:48.377385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:30:41.084 [2024-07-25 07:36:48.377517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:30:41.084 [2024-07-25 07:36:48.377519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:30:41.084 [2024-07-25 07:36:48.377746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.377753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.378175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.378181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.378611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.378619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.379051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.084 [2024-07-25 07:36:48.379058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.084 qpair failed and we were unable to recover it. 00:30:41.084 [2024-07-25 07:36:48.379578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.379607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.380042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.380050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.380573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.380601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.381038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.381046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.381610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.381637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 
00:30:41.085 [2024-07-25 07:36:48.382087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.382096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.382505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.382513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.382814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.382822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.383167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.383174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.383618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.383625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.384047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.384054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.384622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.384650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.385147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.385156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.385742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.385769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.386215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.386225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 
00:30:41.085 [2024-07-25 07:36:48.386640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.386648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.387097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.387105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.387563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.387573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.387805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.387811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.388246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.388254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.388672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.388679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.389108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.389114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.389389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.389396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.389878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.389885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.390317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.390324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 
00:30:41.085 [2024-07-25 07:36:48.390603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.390610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.390939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.390948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.391304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.391313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.391761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.391769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.392266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.392273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.392662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.392669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.393109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.393115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.393504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.393512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.393940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.393947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.394371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.394379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 
00:30:41.085 [2024-07-25 07:36:48.394807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.394814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.395240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.395248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.395698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.085 [2024-07-25 07:36:48.395705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.085 qpair failed and we were unable to recover it. 00:30:41.085 [2024-07-25 07:36:48.396030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.396037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.396370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.396378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.396691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.396699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.397117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.397124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.397562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.397570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.397774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.397789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.398238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.398247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 
00:30:41.086 [2024-07-25 07:36:48.398671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.398677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.399118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.399126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.399406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.399415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.399842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.399849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.400128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.400135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.400608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.400615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.400961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.400968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.401240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.401247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.401783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.401790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.401993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.402002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 
00:30:41.086 [2024-07-25 07:36:48.402319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.402327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.402483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.402490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.402815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.402825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.403245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.403252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.403683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.403690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.403984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.403990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.404419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.404427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.404848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.404855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.405355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.405362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.405579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.405588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 
00:30:41.086 [2024-07-25 07:36:48.406024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.406031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.406461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.406468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.406903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.406910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.407333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.086 [2024-07-25 07:36:48.407340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.086 qpair failed and we were unable to recover it. 00:30:41.086 [2024-07-25 07:36:48.407765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.407771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.408106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.408112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.408455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.408461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.408764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.408772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.409211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.409218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.409765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.409771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 
00:30:41.087 [2024-07-25 07:36:48.410063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.410069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.410590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.410598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.411023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.411031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.411558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.411585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.412106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.412114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.412545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.412552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.412978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.412985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.413519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.413547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.413898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.413906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.414453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.414481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 
00:30:41.087 [2024-07-25 07:36:48.414952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.414960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.415481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.415509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.415951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.415960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.416409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.416436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.416908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.416917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.417443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.417470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.417913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.417921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.418262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.418270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.418703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.418710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.418951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.418958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 
00:30:41.087 [2024-07-25 07:36:48.419262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.419270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.419726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.419733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.420035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.420046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.420472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.420479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.420836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.420843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.421314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.421321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.421666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.421674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.422135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.422142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.422594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.422601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.423034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.423040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 
00:30:41.087 [2024-07-25 07:36:48.423594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.423622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.087 [2024-07-25 07:36:48.424105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.087 [2024-07-25 07:36:48.424114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.087 qpair failed and we were unable to recover it. 00:30:41.088 [2024-07-25 07:36:48.424397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.088 [2024-07-25 07:36:48.424405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.088 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-25 07:36:48.424860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.360 [2024-07-25 07:36:48.424868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-25 07:36:48.425291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.360 [2024-07-25 07:36:48.425298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-25 07:36:48.425596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.360 [2024-07-25 07:36:48.425603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-25 07:36:48.426007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.360 [2024-07-25 07:36:48.426015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-25 07:36:48.426467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.360 [2024-07-25 07:36:48.426475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-25 07:36:48.426822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.360 [2024-07-25 07:36:48.426830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.427267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.427274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 
00:30:41.361 [2024-07-25 07:36:48.427608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.427616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.428060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.428067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.428492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.428500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.428922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.428929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.429390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.429417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.429930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.429938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.430490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.430518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.430745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.430753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.431217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.431224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.431734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.431741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 
00:30:41.361 [2024-07-25 07:36:48.432171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.432177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.432403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.432411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.432830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.432837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.433269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.433277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.433358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.433364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.433589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.433595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.433943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.433950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.434390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.434397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.434819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.434826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.435249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.435256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 
00:30:41.361 [2024-07-25 07:36:48.435583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.435590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.436016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.436022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.436363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.436372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.436716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.436724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.437159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.437167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.437644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.437651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.438068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.438074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.438521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.438549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.438988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.438997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.439545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.439573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 
00:30:41.361 [2024-07-25 07:36:48.440039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.440048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.440421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.440449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.440710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.440718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.441171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.441179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.441641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.441649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-25 07:36:48.442073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.361 [2024-07-25 07:36:48.442081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.442517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.442524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.442760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.442766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.443238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.443244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.443691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.443698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 
00:30:41.362 [2024-07-25 07:36:48.444122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.444130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.444625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.444632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.445053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.445060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.445665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.445693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.446144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.446153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.446372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.446379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.446619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.446626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.447097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.447104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.447568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.447576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.448043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.448050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 
00:30:41.362 [2024-07-25 07:36:48.448579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.448606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.448921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.448929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.449358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.449366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.449791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.449798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.450235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.450243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.450676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.450683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.451104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.451111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.451549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.451557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.451800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.451807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.452249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.452256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 
00:30:41.362 [2024-07-25 07:36:48.452702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.452710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.453187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.453194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.453617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.453626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.454127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.454133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.454554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.454561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.454904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.454911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.455378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.455385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.455680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.455687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.456108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.456114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.456554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.456561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 
00:30:41.362 [2024-07-25 07:36:48.456982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.456989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.457224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.457231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.457693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.457700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.362 qpair failed and we were unable to recover it. 00:30:41.362 [2024-07-25 07:36:48.458125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.362 [2024-07-25 07:36:48.458132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.458452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.458459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.458689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.458697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.459091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.459098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.459551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.459559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.459979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.459986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.460407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.460414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 
00:30:41.363 [2024-07-25 07:36:48.460761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.460768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.461190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.461197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.461618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.461624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.462047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.462053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.462448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.462476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.462719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.462728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.463177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.463185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.463658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.463665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.464166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.464173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.464607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.464615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 
00:30:41.363 [2024-07-25 07:36:48.464830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.464837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.465398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.465426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.465878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.465886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.466131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.466142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.466559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.466567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.466994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.467001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.467289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.467296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.467675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.467683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.468148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.468155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.468482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.468490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 
00:30:41.363 [2024-07-25 07:36:48.468961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.468968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.469388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.469395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.469819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.469829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.470164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.470171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.470614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.470621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.471041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.471048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.471589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.471617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.472109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.472118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.472568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.472576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.473021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.473028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 
00:30:41.363 [2024-07-25 07:36:48.473546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.363 [2024-07-25 07:36:48.473574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.363 qpair failed and we were unable to recover it. 00:30:41.363 [2024-07-25 07:36:48.474048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.474057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.474603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.474631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.475071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.475080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.475450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.475478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.475952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.475960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.476475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.476503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.476755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.476763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.476980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.476986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.477413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.477421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 
00:30:41.364 [2024-07-25 07:36:48.477846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.477854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.478075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.478082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.478517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.478524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.478742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.478748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.479195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.479209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.479667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.479674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.480103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.480109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.480563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.480570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.480994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.481001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.481551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.481580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 
00:30:41.364 [2024-07-25 07:36:48.482053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.482062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.482594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.482621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.483066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.483075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.483603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.483631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.484101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.484109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.484538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.484565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.485021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.485030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.485567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.485594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.485900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.485908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.486427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.486454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 
00:30:41.364 [2024-07-25 07:36:48.486898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.486906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.487249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.487257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.487367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.487378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.487853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.487861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.488308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.488315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.488775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.488781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.489241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.489248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.489595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.489602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.364 [2024-07-25 07:36:48.490047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.364 [2024-07-25 07:36:48.490054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.364 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.490388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.490395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 
00:30:41.365 [2024-07-25 07:36:48.490840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.490847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.491274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.491281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.491721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.491727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.492150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.492157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.492591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.492598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.492812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.492823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.493270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.493277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.493362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.493368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.493769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.493776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.494242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.494250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 
00:30:41.365 [2024-07-25 07:36:48.494739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.494746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.494949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.494957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.495407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.495414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.495710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.495716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.496136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.496143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.496475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.496482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.496904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.496911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.497253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.497260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.497699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.497706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.498157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.498164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 
00:30:41.365 [2024-07-25 07:36:48.498371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.498377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.498820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.498826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.499249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.499256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.499681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.499687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.500115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.500123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.500624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.500631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.501057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.501063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.501486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.365 [2024-07-25 07:36:48.501492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.365 qpair failed and we were unable to recover it. 00:30:41.365 [2024-07-25 07:36:48.501789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.501796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.502096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.502103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 
00:30:41.366 [2024-07-25 07:36:48.502548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.502554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.502981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.502988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.503342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.503349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.503815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.503822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.504295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.504301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.504749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.504755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.505184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.505190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.505616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.505623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.506087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.506094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.506550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.506557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 
00:30:41.366 [2024-07-25 07:36:48.506978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.506984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.507512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.507539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.507783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.507792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.508243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.508251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.508471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.508477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.508801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.508808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.509033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.509044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.509540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.509548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.509969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.509976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.510398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.510405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 
00:30:41.366 [2024-07-25 07:36:48.510717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.510723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.511146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.511152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.511373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.511380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.511561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.511569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.511811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.511817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.512317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.512323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.512776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.512782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.513243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.513250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.513598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.513605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.514061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.514071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 
00:30:41.366 [2024-07-25 07:36:48.514518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.514525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.514873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.514880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.515324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.515331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.515628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.515634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.515985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.515992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.516447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.366 [2024-07-25 07:36:48.516454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.366 qpair failed and we were unable to recover it. 00:30:41.366 [2024-07-25 07:36:48.516876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.516882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.517311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.517318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.517794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.517800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.518224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.518231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 
00:30:41.367 [2024-07-25 07:36:48.518580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.518588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.519027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.519033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.519453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.519460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.519884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.519890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.520324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.520331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.520771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.520778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.520965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.520972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.521451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.521458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.521885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.521891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.522312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.522319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 
00:30:41.367 [2024-07-25 07:36:48.522835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.522842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.523273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.523280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.523709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.523716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.524139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.524145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.524584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.524591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.525033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.525040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.525221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.525236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.525679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.525685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.525906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.525913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.526366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.526373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 
00:30:41.367 [2024-07-25 07:36:48.526814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.526821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.527258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.527265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.527590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.527597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.528019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.528025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.528241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.528248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.528475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.528481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.528942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.528948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.529050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.529057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.529502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.529509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.529936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.529945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 
00:30:41.367 [2024-07-25 07:36:48.530244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.530251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.530680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.530686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.531110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.531117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.367 [2024-07-25 07:36:48.531468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.367 [2024-07-25 07:36:48.531475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.367 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.531917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.531924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.532294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.532301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.532750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.532756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.533174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.533182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.533620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.533627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.534071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.534078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 
00:30:41.368 [2024-07-25 07:36:48.534631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.534658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.534965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.534973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.535494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.535522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.535970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.535978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.536503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.536530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.536774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.536782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.537231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.537238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.537739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.537746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.538179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.538186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.538617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.538624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 
00:30:41.368 [2024-07-25 07:36:48.538842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.538853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.539307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.539315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.539788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.539794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.540223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.540230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.540489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.540496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.540913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.540920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.541259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.541266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.541543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.541549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.542004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.542011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.542431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.542438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 
00:30:41.368 [2024-07-25 07:36:48.542665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.542671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.542878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.542884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.543298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.543305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.543746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.543753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.544186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.544192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.544653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.544659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.544956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.544962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.545384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.545395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.545831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.545837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.546302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.546311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 
00:30:41.368 [2024-07-25 07:36:48.546747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.368 [2024-07-25 07:36:48.546753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.368 qpair failed and we were unable to recover it. 00:30:41.368 [2024-07-25 07:36:48.547180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.547188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.547630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.547637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.547850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.547857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.548300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.548307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.548763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.548770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.549221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.549228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.549418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.549426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.549910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.549916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.550365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.550372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 
00:30:41.369 [2024-07-25 07:36:48.550704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.550711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.551134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.551140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.551574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.551581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.552008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.552015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.552446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.552452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.552664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.552674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.553001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.553007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.553443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.553450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.553753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.553759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.554189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.554195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 
00:30:41.369 [2024-07-25 07:36:48.554709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.554716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.554914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.554922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.555383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.555390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.555604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.555610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.556061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.556067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.556490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.556497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.556922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.556929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.557515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.557542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.557858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.557866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.558101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.558109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 
00:30:41.369 [2024-07-25 07:36:48.558539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.558546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.558903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.558910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.559350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.559356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.559799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.559806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.560233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.560240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.560671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.560678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.561107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.561114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.561446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.561453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.561868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.369 [2024-07-25 07:36:48.561874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.369 qpair failed and we were unable to recover it. 00:30:41.369 [2024-07-25 07:36:48.562304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.562314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 
00:30:41.370 [2024-07-25 07:36:48.562755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.562762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.563059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.563066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.563512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.563519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.563981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.563987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.564515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.564543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.564986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.564994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.565392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.565419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.565931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.565940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.566469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.566497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.566942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.566950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 
00:30:41.370 [2024-07-25 07:36:48.567482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.567517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.567967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.567975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.568421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.568448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.568889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.568898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.569465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.569492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.569725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.569733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.570185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.570192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.570634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.570640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.570860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.570867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.571311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.571318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 
00:30:41.370 [2024-07-25 07:36:48.571545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.571551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.571773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.571780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.572208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.572215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.572664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.572671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.573094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.573101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.573550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.573557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.573991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.573998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.574334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.574341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.574771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.574778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.575213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.575220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 
00:30:41.370 [2024-07-25 07:36:48.575439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.370 [2024-07-25 07:36:48.575447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.370 qpair failed and we were unable to recover it. 00:30:41.370 [2024-07-25 07:36:48.575900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.575906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.576331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.576339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.576767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.576775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.577198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.577209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.577503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.577510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.577977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.577984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.578517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.578545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.578985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.578993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.579547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.579578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 
00:30:41.371 [2024-07-25 07:36:48.579888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.579897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.580354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.580361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.580790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.580797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.581225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.581232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.581691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.581698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.582122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.582130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.582578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.582585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.582809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.582815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.583239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.583246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.583347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.583354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 
00:30:41.371 [2024-07-25 07:36:48.583490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.583500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.584040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.584047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.584258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.584265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.584777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.584784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.584988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.584997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.585457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.585466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.585812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.585819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.586023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.586030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.586475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.586482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.586901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.586908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 
00:30:41.371 [2024-07-25 07:36:48.587331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.587339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.587766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.587773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.588214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.588221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.588541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.588547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.589016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.589023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.589443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.589450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.589873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.589879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.590301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.590309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.371 qpair failed and we were unable to recover it. 00:30:41.371 [2024-07-25 07:36:48.590738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.371 [2024-07-25 07:36:48.590745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.591180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.591187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 
00:30:41.372 [2024-07-25 07:36:48.591651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.591659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.591970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.591977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.592598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.592626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.593068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.593077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.593610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.593637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.594082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.594091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.594596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.594604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.595033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.595040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.595579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.595607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.596045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.596056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 
00:30:41.372 [2024-07-25 07:36:48.596462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.596490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.596965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.596974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.597077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.597084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.597500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.597507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.597934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.597942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.598406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.598413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.598830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.598837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.599275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.599282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.599759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.599766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 00:30:41.372 [2024-07-25 07:36:48.600206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.372 [2024-07-25 07:36:48.600213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.372 qpair failed and we were unable to recover it. 
00:30:41.372 [2024-07-25 07:36:48.600661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.372 [2024-07-25 07:36:48.600668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:41.372 qpair failed and we were unable to recover it.
00:30:41.372 [2024-07-25 07:36:48.601093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.372 [2024-07-25 07:36:48.601099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:41.372 qpair failed and we were unable to recover it.
00:30:41.372 [2024-07-25 07:36:48.601548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.372 [2024-07-25 07:36:48.601556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:41.372 qpair failed and we were unable to recover it.
00:30:41.372 [2024-07-25 07:36:48.602037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.372 [2024-07-25 07:36:48.602044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:41.372 qpair failed and we were unable to recover it.
00:30:41.372 [2024-07-25 07:36:48.602131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.372 [2024-07-25 07:36:48.602142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:41.372 qpair failed and we were unable to recover it.
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Write completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Write completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Write completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Write completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Write completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Write completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Write completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Write completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Write completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Write completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 Read completed with error (sct=0, sc=8)
00:30:41.372 starting I/O failed
00:30:41.372 [2024-07-25 07:36:48.602880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:41.372 [2024-07-25 07:36:48.603599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.373 [2024-07-25 07:36:48.603686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420
00:30:41.373 qpair failed and we were unable to recover it.
00:30:41.373 [2024-07-25 07:36:48.604023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.373 [2024-07-25 07:36:48.604059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420
00:30:41.373 qpair failed and we were unable to recover it.
00:30:41.373 [2024-07-25 07:36:48.604497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.373 [2024-07-25 07:36:48.604584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420
00:30:41.373 qpair failed and we were unable to recover it.
00:30:41.373 [2024-07-25 07:36:48.605153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.373 [2024-07-25 07:36:48.605190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420
00:30:41.373 qpair failed and we were unable to recover it.
00:30:41.373 [2024-07-25 07:36:48.605703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.373 [2024-07-25 07:36:48.605744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420
00:30:41.373 qpair failed and we were unable to recover it.
00:30:41.373 [2024-07-25 07:36:48.606210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.373 [2024-07-25 07:36:48.606241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420
00:30:41.373 qpair failed and we were unable to recover it.
00:30:41.373 [2024-07-25 07:36:48.606707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.373 [2024-07-25 07:36:48.606736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420
00:30:41.373 qpair failed and we were unable to recover it.
00:30:41.373 [2024-07-25 07:36:48.606994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.607022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.607602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.607690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.608440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.608528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.608977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.609016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.609389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.609421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.609906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.609934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.610295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.610329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.610811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.610840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.611306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.611336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.611732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.611760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 
00:30:41.373 [2024-07-25 07:36:48.611957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.611985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.612386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.612422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.612960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.612988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.613464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.613493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.613945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.613973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.614448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.614477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.614967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.614994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.615403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.615491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.615848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.615883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.616157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.616185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 
00:30:41.373 [2024-07-25 07:36:48.616688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.616718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.617211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.617241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.617546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.617574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.618124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.618152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.618548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.618578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.619079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.619107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.619591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.619621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.619891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.619919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.620258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.620286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.373 [2024-07-25 07:36:48.620762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.620790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 
00:30:41.373 [2024-07-25 07:36:48.621067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.373 [2024-07-25 07:36:48.621095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.373 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.621579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.621608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.622102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.622130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.622608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.622637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.623102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.623130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.623387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.623416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.623889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.623916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.624399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.624441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.624929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.624956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.625294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.625323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 
00:30:41.374 [2024-07-25 07:36:48.625814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.625842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.626251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.626292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.626780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.626808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.627172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.627214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.627714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.627743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.628222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.628251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.628746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.628774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.629248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.629276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.629659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.629686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.630151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.630179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 
00:30:41.374 [2024-07-25 07:36:48.630674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.630703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.631259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.631288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.631755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.631782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.632115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.632142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.632626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.632655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.633124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.633152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.633432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.633463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.633703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.374 [2024-07-25 07:36:48.633731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.374 qpair failed and we were unable to recover it. 00:30:41.374 [2024-07-25 07:36:48.634097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.634125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d94000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 
00:30:41.375 [2024-07-25 07:36:48.634246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1189f20 is same with the state(5) to be set 00:30:41.375 [2024-07-25 07:36:48.634652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.634680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.635192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.635206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.635720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.635747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.636187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.636195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.636603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.636644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.637078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.637086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.637616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.637644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.638094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.638102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.638427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.638455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 
00:30:41.375 [2024-07-25 07:36:48.638936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.638945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.639123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.639134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.639636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.639644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.639861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.639871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.639958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.639965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.640244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.640251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.640685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.640691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.641154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.641161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.641611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.641618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.642087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.642095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 
00:30:41.375 [2024-07-25 07:36:48.642594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.642601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.643023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.643029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.643134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.643141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.643551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.643558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.643979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.643986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.644493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.644500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.644927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.644933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.645472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.645500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.646035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.646044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.646551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.646578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 
00:30:41.375 [2024-07-25 07:36:48.646828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.646836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.647319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.647326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.647780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.647787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.648260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.648267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.648523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.648530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.649005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.649011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.649312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.649320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.375 [2024-07-25 07:36:48.649828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.375 [2024-07-25 07:36:48.649834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.375 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.650255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.650262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.650680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.650687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 
00:30:41.376 [2024-07-25 07:36:48.650787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.650793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.651100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.651106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.651548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.651555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.651988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.651996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.652495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.652501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.652715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.652730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.653184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.653191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.653530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.653537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.653771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.653781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.653972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.653980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 
00:30:41.376 [2024-07-25 07:36:48.654396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.654404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.654846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.654853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.655350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.655357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.655570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.655576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.655685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.655692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.656007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.656014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.656441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.656448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.656872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.656879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.657087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.657095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.657202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.657209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 
00:30:41.376 [2024-07-25 07:36:48.657662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.657669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.658091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.658098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.658451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.658459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.658907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.658913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.659336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.659343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.659789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.659795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.660217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.660224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.660649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.660655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.661039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.661046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.661460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.661466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 
00:30:41.376 [2024-07-25 07:36:48.661884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.661891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.662216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.662223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.662651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.662657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.662998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.663005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.663443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.663450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.376 qpair failed and we were unable to recover it. 00:30:41.376 [2024-07-25 07:36:48.663688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.376 [2024-07-25 07:36:48.663695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.664137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.664144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.664439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.664446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.664873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.664879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.665114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.665120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 
00:30:41.377 [2024-07-25 07:36:48.665567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.665573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.665994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.666001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.666342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.666349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.666454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.666461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.666593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.666600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.667052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.667060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.667295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.667301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.667538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.667545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.668024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.668031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.668455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.668462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 
00:30:41.377 [2024-07-25 07:36:48.668886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.668892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.669314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.669320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.669537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.669544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.669741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.669748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.669940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.669947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.670377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.670384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.670831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.670837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.671139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.671145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.671402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.671409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.671833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.671840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 
00:30:41.377 [2024-07-25 07:36:48.672263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.672270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.672592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.672599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.672813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.672820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.673261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.673267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.673602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.673609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.674046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.674052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.674534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.674541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.674879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.674886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.675087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.675094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.675377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.675384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 
00:30:41.377 [2024-07-25 07:36:48.675813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.675820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.676244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.676250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.377 [2024-07-25 07:36:48.676674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.377 [2024-07-25 07:36:48.676681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.377 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.677031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.677037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.677457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.677464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.677896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.677902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.678400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.678407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.678606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.678615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.679043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.679049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.679495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.679501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 
00:30:41.378 [2024-07-25 07:36:48.679799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.679805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.680228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.680235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.680474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.680480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.680557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.680565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.681025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.681032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.681459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.681469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.681890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.681896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.682130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.682137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.682451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.682459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 00:30:41.378 [2024-07-25 07:36:48.682669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.378 [2024-07-25 07:36:48.682678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.378 qpair failed and we were unable to recover it. 
00:30:41.378 [2024-07-25 07:36:48.683124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:41.378 [2024-07-25 07:36:48.683130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420
00:30:41.378 qpair failed and we were unable to recover it.
00:30:41.655 [The same pair of errors (posix.c:1023:posix_sock_create: connect() failed, errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420) repeats for every reconnect attempt from 07:36:48.683124 through 07:36:48.766874; each attempt ends with "qpair failed and we were unable to recover it."]
00:30:41.655 [2024-07-25 07:36:48.767083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.655 [2024-07-25 07:36:48.767095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.655 qpair failed and we were unable to recover it. 00:30:41.655 [2024-07-25 07:36:48.767285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.655 [2024-07-25 07:36:48.767293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.655 qpair failed and we were unable to recover it. 00:30:41.655 [2024-07-25 07:36:48.767611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.655 [2024-07-25 07:36:48.767618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.655 qpair failed and we were unable to recover it. 00:30:41.655 [2024-07-25 07:36:48.768045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.655 [2024-07-25 07:36:48.768052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.655 qpair failed and we were unable to recover it. 00:30:41.655 [2024-07-25 07:36:48.768267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.655 [2024-07-25 07:36:48.768275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.655 qpair failed and we were unable to recover it. 00:30:41.655 [2024-07-25 07:36:48.768717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.768723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.769080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.769087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.769534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.769541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.770011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.770018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.770241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.770249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 
00:30:41.656 [2024-07-25 07:36:48.770479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.770486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.770909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.770915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.771347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.771357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.771789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.771796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.772254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.772261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.772704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.772711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.773137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.773143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.773579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.773585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.773882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.773890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.774341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.774348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 
00:30:41.656 [2024-07-25 07:36:48.774845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.774852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.775296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.775309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.775534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.775540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.776025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.776031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.776370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.776376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.776805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.776811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.777022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.777028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.777228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.777234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.777678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.777684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.778190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.778196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 
00:30:41.656 [2024-07-25 07:36:48.778628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.778634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.779055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.779061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.779594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.779622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.780062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.780070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.780652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.780679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.781119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.781127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.781614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.781642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.782075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.782084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.782546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.782553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.782878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.782885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 
00:30:41.656 [2024-07-25 07:36:48.783110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.783120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.783497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.656 [2024-07-25 07:36:48.783506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.656 qpair failed and we were unable to recover it. 00:30:41.656 [2024-07-25 07:36:48.783711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.783718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.784167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.784174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.784656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.784663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.785087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.785094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.785545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.785553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.785996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.786004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.786460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.786467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.786806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.786812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 
00:30:41.657 [2024-07-25 07:36:48.787218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.787225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.787405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.787421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.787831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.787841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.788046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.788053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.788509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.788516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.788825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.788832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.789282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.789289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.789623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.789629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.790088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.790094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.790441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.790449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 
00:30:41.657 [2024-07-25 07:36:48.790886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.790893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.791317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.791324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.791640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.791647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.792082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.792088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.792577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.792583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.792970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.792976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.793396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.793403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.793829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.793835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.794302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.794308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.794742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.794748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 
00:30:41.657 [2024-07-25 07:36:48.795048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.795055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.795288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.795295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.657 qpair failed and we were unable to recover it. 00:30:41.657 [2024-07-25 07:36:48.795770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.657 [2024-07-25 07:36:48.795777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.796194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.796204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.796440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.796446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.796890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.796897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.797133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.797139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.797449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.797456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.797885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.797892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.798313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.798320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 
00:30:41.658 [2024-07-25 07:36:48.798764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.798771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.799213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.799220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.799638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.799644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.800069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.800076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.800552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.800558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.800805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.800811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.801257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.801264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.801688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.801695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.802023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.802030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.802477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.802484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 
00:30:41.658 [2024-07-25 07:36:48.802908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.802916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.803264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.803271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.803719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.803727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.804153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.804160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.804659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.804666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.805088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.805095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.805333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.805340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.805637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.805643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.806111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.806118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.806340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.806347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 
00:30:41.658 [2024-07-25 07:36:48.806753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.806760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.807184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.807191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.807614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.807621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.808045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.808051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.808577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.808605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.809077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.809085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.809598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.809605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.809815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.809822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.810155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.810162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 00:30:41.658 [2024-07-25 07:36:48.810611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.658 [2024-07-25 07:36:48.810617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.658 qpair failed and we were unable to recover it. 
00:30:41.659 [2024-07-25 07:36:48.811043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.811049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.811585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.811613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.811859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.811867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.812152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.812159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.812615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.812623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.813036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.813042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.813553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.813581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.814052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.814060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.814585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.814612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.814964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.814972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 
00:30:41.659 [2024-07-25 07:36:48.815487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.815515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.816017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.816025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.816424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.816451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.816921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.816929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.817476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.817503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.818030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.818039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.818558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.818585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.818918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.818927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.819491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.819519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.819764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.819773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 
00:30:41.659 [2024-07-25 07:36:48.820231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.820238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.820495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.820502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.820940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.820950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.821165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.821171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.821694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.821701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.822135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.822142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.822586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.822592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.823019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.823026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.823551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.823578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.823811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.823819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 
00:30:41.659 [2024-07-25 07:36:48.824016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.824026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.824495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.824502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.824932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.824939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.825392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.825399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.825501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.825507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.825847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.825854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.826276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.659 [2024-07-25 07:36:48.826283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.659 qpair failed and we were unable to recover it. 00:30:41.659 [2024-07-25 07:36:48.826622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.826629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.827078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.827085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.827545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.827552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 
00:30:41.660 [2024-07-25 07:36:48.828054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.828060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.828458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.828465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.828891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.828898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.829206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.829213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.829410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.829420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.829962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.829969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.830404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.830431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.830906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.830915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.831341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.831349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.831799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.831807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 
00:30:41.660 [2024-07-25 07:36:48.832015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.832023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.832489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.832497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.832715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.832727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.833178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.833185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.833621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.833628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.834049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.834056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.834427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.834454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.834891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.834901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.835460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.835487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.835925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.835935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 
00:30:41.660 [2024-07-25 07:36:48.836495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.836524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.837005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.837013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.837422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.837453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.837952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.837960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.838430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.838457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.838967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.838977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.839516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.839544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.839793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.839803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.840230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.840238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.840475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.840483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 
00:30:41.660 [2024-07-25 07:36:48.840910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.840917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.841156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.841163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.841358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.841365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.841792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.841800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.660 qpair failed and we were unable to recover it. 00:30:41.660 [2024-07-25 07:36:48.842222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.660 [2024-07-25 07:36:48.842230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.842673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.842681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.843105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.843112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.843549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.843556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.843978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.843985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.844401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.844409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 
00:30:41.661 [2024-07-25 07:36:48.844835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.844843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.845084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.845092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.845512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.845520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.845955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.845962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.846386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.846393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.846816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.846823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.847026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.847033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.847471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.847478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.847822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.847830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.848267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.848274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 
00:30:41.661 [2024-07-25 07:36:48.848711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.848718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.849060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.849067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.849551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.849557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.849980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.849986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.850461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.850488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.850929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.850939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.851378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.851406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.851907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.851917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.852510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.852538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.852979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.852989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 
00:30:41.661 [2024-07-25 07:36:48.853527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.853555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.854001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.854009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.854547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.854578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.854880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.854888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.855235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.855243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.855698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.855706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.661 [2024-07-25 07:36:48.856159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.661 [2024-07-25 07:36:48.856167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.661 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.856516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.856524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.856975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.856982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.857405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.857413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 
00:30:41.662 [2024-07-25 07:36:48.857877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.857885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.858100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.858107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.858530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.858537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.858863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.858870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.859293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.859301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.859729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.859735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.860160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.860167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.860606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.860613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.860956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.860963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.861392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.861400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 
00:30:41.662 [2024-07-25 07:36:48.861841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.861849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.862269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.862277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.862749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.862756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.863223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.863231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.863690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.863697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.864209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.864218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.864420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.864427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.864879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.864886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.865310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.865318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.865657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.865664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 
00:30:41.662 [2024-07-25 07:36:48.866084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.866091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.866527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.866534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.866832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.866840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.867196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.867207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.867631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.867638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.867851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.867859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.868303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.868310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.868745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.868752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.869052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.869059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.869409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.869416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 
00:30:41.662 [2024-07-25 07:36:48.869841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.869847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.870300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.870308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.870764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.870772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.871009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.662 [2024-07-25 07:36:48.871015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.662 qpair failed and we were unable to recover it. 00:30:41.662 [2024-07-25 07:36:48.871463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.871469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.871964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.871972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.872542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.872570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.873065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.873073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.873487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.873514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.873985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.873994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 
00:30:41.663 [2024-07-25 07:36:48.874556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.874583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.874805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.874813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.875265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.875272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.875483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.875494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.875951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.875958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.876171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.876178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.876635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.876642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.876964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.876971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.877335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.877342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.877785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.877792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 
00:30:41.663 [2024-07-25 07:36:48.878096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.878104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.878637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.878644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.879062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.879068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.879621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.879648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.880088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.880096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.880541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.880548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.880970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.880976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.881490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.881517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.881962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.881970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.882580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.882608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 
00:30:41.663 [2024-07-25 07:36:48.883078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.883086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.883329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.883336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.883801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.883807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.884105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.884112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.884553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.884560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.884788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.884795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.885092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.885098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.885505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.885512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.885627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.885634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.886137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.886144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 
00:30:41.663 [2024-07-25 07:36:48.886616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.886622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.663 qpair failed and we were unable to recover it. 00:30:41.663 [2024-07-25 07:36:48.887079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.663 [2024-07-25 07:36:48.887085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.887425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.887435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.887648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.887655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.888092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.888099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.888557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.888564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.888988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.888995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.889215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.889222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.889682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.889689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.889771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.889777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 
00:30:41.664 [2024-07-25 07:36:48.890189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.890196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.890418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.890425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.890893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.890900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.891362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.891370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.891825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.891832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.892128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.892135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.892644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.892651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.893095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.893102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.893547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.893554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.893977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.893983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 
00:30:41.664 [2024-07-25 07:36:48.894328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.894336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.894798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.894805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.895246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.895253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.895687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.895694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.896114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.896121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.896439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.896447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.896863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.896870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.897299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.897305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.897662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.897669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.898128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.898135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 
00:30:41.664 [2024-07-25 07:36:48.898610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.898617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.899039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.899046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.899468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.899475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.899892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.899899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.900402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.900429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.900862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.900871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.901170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.901177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.901607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.901614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.664 [2024-07-25 07:36:48.902047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.664 [2024-07-25 07:36:48.902054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.664 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.902596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.902623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 
00:30:41.665 [2024-07-25 07:36:48.903095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.903104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.903684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.903712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.904151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.904159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.904401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.904409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.904769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.904775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.905206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.905213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.905450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.905456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.905892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.905899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.906138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.906144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.906443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.906451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 
00:30:41.665 [2024-07-25 07:36:48.906772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.906779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.907080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.907088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.907603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.907609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.907910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.907916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.908054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.908061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.908299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.908306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.908734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.908742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.909188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.909195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.909623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.909630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.910050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.910056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 
00:30:41.665 [2024-07-25 07:36:48.910574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.910602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.911128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.911136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.911626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.911634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.911930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.911939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.912531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.912558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.913002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.913011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.913532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.913559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.913802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.913810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.914261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.914269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.914776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.914786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 
00:30:41.665 [2024-07-25 07:36:48.915212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.915219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.915634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.915641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.916063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.916071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.916475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.916483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.916698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.916704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.917148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.665 [2024-07-25 07:36:48.917154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.665 qpair failed and we were unable to recover it. 00:30:41.665 [2024-07-25 07:36:48.917575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.917582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.917790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.917801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.918251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.918258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.918743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.918749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 
00:30:41.666 [2024-07-25 07:36:48.919172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.919179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.919614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.919622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.920050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.920057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.920472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.920500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.920807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.920817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.921332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.921339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.921634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.921642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.921716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.921724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.921910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.921917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.922347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.922354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 
00:30:41.666 [2024-07-25 07:36:48.922578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.922585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.923052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.923058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.923485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.923492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.923817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.923824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.924245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.924253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.924393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.924400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.924697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.924704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.925135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.925143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.925583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.925590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.926018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.926025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 
00:30:41.666 [2024-07-25 07:36:48.926472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.926479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.926919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.926926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.927348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.927355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.927704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.927711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.928137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.928144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.928579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.928586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.929096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.929103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.929319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.666 [2024-07-25 07:36:48.929327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.666 qpair failed and we were unable to recover it. 00:30:41.666 [2024-07-25 07:36:48.929759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.929767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.930114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.930123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 
00:30:41.667 [2024-07-25 07:36:48.930403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.930410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.930665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.930672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.931110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.931117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.931544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.931551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.931977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.931984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.932221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.932228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.932692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.932698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.933122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.933129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.933558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.933565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.933991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.933998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 
00:30:41.667 [2024-07-25 07:36:48.934424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.934432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.934653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.934661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.934903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.934911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.935105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.935113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.935548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.935556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.935993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.935999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.936448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.936455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.936791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.936799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.937243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.937251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.937598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.937605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 
00:30:41.667 [2024-07-25 07:36:48.938074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.938081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.938502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.938509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.938929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.938936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.939453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.939481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.939790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.939799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.940224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.940232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.940748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.940755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.941180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.941187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.941524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.941532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.941976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.941983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 
00:30:41.667 [2024-07-25 07:36:48.942461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.942468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.942901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.942908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.943390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.943417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.943889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.943898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.944417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.667 [2024-07-25 07:36:48.944444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.667 qpair failed and we were unable to recover it. 00:30:41.667 [2024-07-25 07:36:48.944750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.944759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.945185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.945192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.945547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.945555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.946026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.946033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.946550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.946581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 
00:30:41.668 [2024-07-25 07:36:48.946918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.946928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.947482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.947509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.947980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.947989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.948547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.948575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.948880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.948889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.949321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.949328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.949543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.949550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.949970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.949977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.950197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.950208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.950416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.950424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 
00:30:41.668 [2024-07-25 07:36:48.950746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.950753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.950991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.950998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.951247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.951254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.951705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.951712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.952132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.952139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.952500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.952508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.952938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.952945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.953366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.953374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.953610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.953617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.954152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.954160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 
00:30:41.668 [2024-07-25 07:36:48.954585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.954592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.955014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.955021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.955368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.955375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.955807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.955814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.956086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.956093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.956518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.956525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.956948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.956955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.957376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.957384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.957845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.957852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.958274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.958281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 
00:30:41.668 [2024-07-25 07:36:48.958714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.958722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.959143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.959150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.668 qpair failed and we were unable to recover it. 00:30:41.668 [2024-07-25 07:36:48.959582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.668 [2024-07-25 07:36:48.959590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.959799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.959811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.960243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.960251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.960684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.960691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.961109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.961117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.961560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.961567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.961904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.961911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.962340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.962349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 
00:30:41.669 [2024-07-25 07:36:48.962706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.962713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.963145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.963152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.963386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.963393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.963825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.963832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.964061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.964068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.964521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.964528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.964959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.964967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.965185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.965195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.965633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.965641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.966077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.966084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 
00:30:41.669 [2024-07-25 07:36:48.966615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.966642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.967084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.967093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.967592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.967620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.968062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.968072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.968496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.968523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.968965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.968976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.969539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.969567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.970013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.970022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.970564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.970591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.970820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.970831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 
00:30:41.669 [2024-07-25 07:36:48.971248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.971255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.971773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.971779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.972120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.972126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.972431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.972437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.972768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.972775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.973207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.973214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.973538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.973544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.973989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.973996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.974438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.974444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 00:30:41.669 [2024-07-25 07:36:48.974865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.669 [2024-07-25 07:36:48.974871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.669 qpair failed and we were unable to recover it. 
00:30:41.670 [2024-07-25 07:36:48.975294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.975300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.975747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.975754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.976048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.976054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.976271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.976278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.976623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.976630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.976872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.976879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.977343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.977350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.977693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.977700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.978122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.978129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.978568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.978577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 
00:30:41.670 [2024-07-25 07:36:48.979001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.979008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.979429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.979436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.979867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.979873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.980083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.980090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.980528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.980535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.980958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.980964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.981386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.981393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.981860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.981866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.982291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.982298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.982518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.982525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 
00:30:41.670 [2024-07-25 07:36:48.982982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.982989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.983472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.983478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.984015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.984022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.984413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.984440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.984944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.984952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.985390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.985418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.985725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.985733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.986167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.986174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.986657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.986663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.987131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.987137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 
00:30:41.670 [2024-07-25 07:36:48.987595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.987602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.988025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.988032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.988470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.988498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.988954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.988962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.989508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.989535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.989918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.989926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.990450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.670 [2024-07-25 07:36:48.990477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.670 qpair failed and we were unable to recover it. 00:30:41.670 [2024-07-25 07:36:48.991001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.991009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.991534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.991561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.992012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.992021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 
00:30:41.671 [2024-07-25 07:36:48.992573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.992600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.992916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.992924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.993398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.993425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.993899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.993908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.994445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.994473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.994915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.994924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.995354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.995361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.995786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.995793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.996033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.996039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.996467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.996477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 
00:30:41.671 [2024-07-25 07:36:48.996902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.996908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.997408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.997436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.997879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.997888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.998331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.998338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.998763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.998770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.999193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.999203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.999435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.999442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:48.999756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:48.999762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:49.000227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:49.000234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:49.000568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:49.000575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 
00:30:41.671 [2024-07-25 07:36:49.000873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:49.000880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:49.001311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:49.001319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:49.001772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:49.001779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:49.002089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:49.002098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:49.002549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:49.002556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 [2024-07-25 07:36:49.002980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:49.002987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:41.671 [2024-07-25 07:36:49.003341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:49.003349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 00:30:41.671 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:41.671 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:41.671 [2024-07-25 07:36:49.003800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.671 [2024-07-25 07:36:49.003807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.671 qpair failed and we were unable to recover it. 
00:30:41.672 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:41.672 [2024-07-25 07:36:49.004148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.004155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.672 [2024-07-25 07:36:49.004376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.004383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.004864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.004871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.005302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.005310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.005795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.005802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.006010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.006016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.006319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.006327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.006812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.006819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.007115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.007121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 
00:30:41.672 [2024-07-25 07:36:49.007304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.007317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.007792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.007799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.008224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.008232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.008463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.008472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.008709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.008717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.009196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.009214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.009670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.009677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.672 [2024-07-25 07:36:49.009892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.672 [2024-07-25 07:36:49.009901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.672 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.010230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.010239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.010678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.010684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 
00:30:41.938 [2024-07-25 07:36:49.011116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.011126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.011552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.011559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.011870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.011877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.012328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.012336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.012786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.012794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.013219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.013226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.013739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.013746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.014186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.014193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.014634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.014641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.015068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.015076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 
00:30:41.938 [2024-07-25 07:36:49.015464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.015492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.015942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.015951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.016183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.016190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.016536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.016543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.016966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.016973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.017424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.017453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.017936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.017945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.018417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.018444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.018892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.018900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.938 [2024-07-25 07:36:49.019116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.019123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 
00:30:41.938 [2024-07-25 07:36:49.019595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.938 [2024-07-25 07:36:49.019602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.938 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.020022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.020031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.020558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.020586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.020901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.020909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.021208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.021216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.021706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.021713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.022138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.022145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.022692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.022720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.023192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.023206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.023636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.023663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 
00:30:41.939 [2024-07-25 07:36:49.023905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.023914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.024438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.024466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.024918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.024928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.025476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.025503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.025919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.025928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.026355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.026362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.026811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.026818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.027245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.027254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.027762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.027770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.028210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.028219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 
00:30:41.939 [2024-07-25 07:36:49.028637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.028646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.029067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.029075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.029379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.029387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.029851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.029858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.030318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.030326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.030654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.030661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.031103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.031110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.031439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.031447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.031794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.031802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.032256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.032263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 
00:30:41.939 [2024-07-25 07:36:49.032688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.032695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.033117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.033124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.033547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.033554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.033974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.033981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.034281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.034288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.034739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.034746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.035209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.035216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.035626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.939 [2024-07-25 07:36:49.035635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.939 qpair failed and we were unable to recover it. 00:30:41.939 [2024-07-25 07:36:49.035981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.035989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.036401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.036408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 
00:30:41.940 [2024-07-25 07:36:49.036828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.036835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.037174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.037182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.037521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.037528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.037950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.037957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.038295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.038302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.038776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.038784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.039204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.039212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.039549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.039556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.039995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.040001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.040514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.040541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 
00:30:41.940 [2024-07-25 07:36:49.041011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.041019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.041564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.041591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.041912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.041921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.042161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.042169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.042403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.042411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.042854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.042861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.043285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.043292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.043396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.043402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.043632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.043638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 
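The repeated connect() failures above all report errno = 111, which on Linux is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 while the target side is down, so every qpair reconnect attempt from nvme_tcp_qpair_connect_sock() fails immediately. That is exactly the condition this target-disconnect test case exercises. A minimal sketch of how one could wait for the listener to come back before retrying, using a hypothetical helper that is not part of the SPDK test scripts:

    # Hypothetical helper (not from the test suite): poll until a TCP listener
    # accepts connections, so reconnects stop hitting ECONNREFUSED (errno 111).
    wait_for_listener() {
        local addr=$1 port=$2 tries=${3:-30}
        for ((i = 0; i < tries; i++)); do
            # bash's /dev/tcp pseudo-device attempts a plain TCP connect
            if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
                return 0
            fi
            sleep 1
        done
        return 1
    }
    # Example: wait_for_listener 10.0.0.2 4420 || echo "target still down"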
00:30:41.940 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.940 [2024-07-25 07:36:49.043951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.043960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.044182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.044189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:41.940 [2024-07-25 07:36:49.044621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.940 [2024-07-25 07:36:49.044631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.940 [2024-07-25 07:36:49.045021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.045029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.045469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.045476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.045916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.045923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.046275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.046282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.046635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.046642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 
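Interleaved with the connection errors, the test case (nvmf_target_disconnect_tc2) begins rebuilding the target configuration. The first RPC, bdev_malloc_create 64 512 -b Malloc0, creates an in-memory block device named Malloc0; the two positional arguments are the size in MB and the block size in bytes. A stand-alone equivalent against a running nvmf_tgt, assuming the default RPC socket, would look roughly like this:

    # Sketch: create the RAM-backed bdev that the test exports later.
    # 64 = size in MB, 512 = block size in bytes, -b names the bdev.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0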
00:30:41.940 [2024-07-25 07:36:49.046723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.046729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.047151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.047158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.047629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.047636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.048062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.048069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.048209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.048215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.048668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.048674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.940 [2024-07-25 07:36:49.049087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.940 [2024-07-25 07:36:49.049094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.940 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.049539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.049546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.049959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.049966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.050069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.050075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 
00:30:41.941 [2024-07-25 07:36:49.050425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.050432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.050909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.050915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.051366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.051373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.051796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.051802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.052017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.052029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.052365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.052372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.052795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.052801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.053222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.053229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.053731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.053738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.054188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.054194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 
00:30:41.941 [2024-07-25 07:36:49.054618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.054625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.055047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.055054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.055624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.055652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.056150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.056159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.056700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.056728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.057413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.057440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.057749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.057758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.058181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.058188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.058429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.058436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.058872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.058880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 
00:30:41.941 [2024-07-25 07:36:49.059333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.059342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 Malloc0 00:30:41.941 [2024-07-25 07:36:49.059818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.059829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.060265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.060272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.941 [2024-07-25 07:36:49.060602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.060609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.060811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.060828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:41.941 [2024-07-25 07:36:49.061091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.061099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.941 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.941 [2024-07-25 07:36:49.061600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.061607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.061818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.061825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 
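The next RPC, nvmf_create_transport -t tcp (the harness appends its own extra transport options), initializes the NVMe-oF TCP transport inside the target; the "*** TCP Transport Init ***" notice a little further down confirms it took effect. A minimal stand-alone equivalent, with transport tuning left at defaults:

    # Sketch: initialize the NVMe/TCP transport in a running nvmf_tgt.
    # The test passes additional options via its wrapper; defaults are used here.
    scripts/rpc.py nvmf_create_transport -t tcp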
00:30:41.941 [2024-07-25 07:36:49.062271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.062278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.062798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.062804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.063136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.063142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.063563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.063569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.941 qpair failed and we were unable to recover it. 00:30:41.941 [2024-07-25 07:36:49.063993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.941 [2024-07-25 07:36:49.064000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.064351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.064358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.064840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.064847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.065095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.065102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.065577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.065584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.066006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.066012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 
00:30:41.942 [2024-07-25 07:36:49.066447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.066454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.066896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.066903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.067129] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.942 [2024-07-25 07:36:49.067496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.067524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.067766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.067776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.068221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.068229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.068634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.068640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.068975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.068982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.069286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.069294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.069627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.069634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 
00:30:41.942 [2024-07-25 07:36:49.070054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.070061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.070309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.070316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.070742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.070748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.071172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.071179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.071419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.071427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.071670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.071677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.071885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.071892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.072401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.072407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.072832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.072839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.073097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.073104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 
00:30:41.942 [2024-07-25 07:36:49.073502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.073509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.073932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.073939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.074402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.074410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.074708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.074715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.075059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.075066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.075487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.075494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 [2024-07-25 07:36:49.075923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.075929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.942 [2024-07-25 07:36:49.076281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.942 [2024-07-25 07:36:49.076288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.942 qpair failed and we were unable to recover it. 00:30:41.942 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:41.942 [2024-07-25 07:36:49.076637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.076644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 
00:30:41.943 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.943 [2024-07-25 07:36:49.077075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.077082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.943 [2024-07-25 07:36:49.077380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.077388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.077738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.077745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.078174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.078180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.078652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.078659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.079079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.079086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.079547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.079555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.080017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.080023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.080444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.080471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 
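nvmf_create_subsystem then defines the NVMe-oF subsystem the initiator will attach to: nqn.2016-06.io.spdk:cnode1 is the subsystem NQN, -s sets its serial number, and -a allows any host to connect without an explicit host allow-list. Roughly:

    # Sketch: create the subsystem; -a = allow any host, -s = serial number.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001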
00:30:41.943 [2024-07-25 07:36:49.080908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.080916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.081220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.081237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.081698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.081705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.082136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.082142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.082445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.082452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.082875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.082881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.083302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.083309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.083737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.083743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.084045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.084051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.084272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.084280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 
00:30:41.943 [2024-07-25 07:36:49.084569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.084576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.084815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.084828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.085246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.085253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.085681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.085688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.085907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.085914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.086315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.086321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.086559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.086566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.087002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.087008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.087523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.087530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 [2024-07-25 07:36:49.087831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.087838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 
00:30:41.943 [2024-07-25 07:36:49.088137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.088144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.943 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:41.943 [2024-07-25 07:36:49.088623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.943 [2024-07-25 07:36:49.088632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.943 qpair failed and we were unable to recover it. 00:30:41.943 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.943 [2024-07-25 07:36:49.089058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.089065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.944 [2024-07-25 07:36:49.089280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.089287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.089725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.089731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.090155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.090161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.090462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.090468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.090896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.090903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 
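nvmf_subsystem_add_ns attaches the Malloc0 bdev created earlier to that subsystem as a namespace, which is what the remote host ultimately sees as an NVMe namespace:

    # Sketch: expose the Malloc0 bdev as a namespace of the subsystem.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0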
00:30:41.944 [2024-07-25 07:36:49.091340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.091347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.091785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.091791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.092038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.092045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.092474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.092481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.092906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.092912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.093128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.093134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.093566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.093573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.093996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.094002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.094532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.094560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.094938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.094946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 
00:30:41.944 [2024-07-25 07:36:49.095559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.095587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.095833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.095842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.096363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.096370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.096575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.096585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.096815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.096821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.097341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.097347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.097780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.097787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.098003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.098009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.098122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.098130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.098590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.098600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 
00:30:41.944 [2024-07-25 07:36:49.099022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.099029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.099472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.099479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.099923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.099930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.944 [2024-07-25 07:36:49.100352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.100360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.944 [2024-07-25 07:36:49.100872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.100879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.944 [2024-07-25 07:36:49.101126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.101133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.101470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.101478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.101672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.101682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 
00:30:41.944 [2024-07-25 07:36:49.102144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.102151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.944 qpair failed and we were unable to recover it. 00:30:41.944 [2024-07-25 07:36:49.102594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.944 [2024-07-25 07:36:49.102601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.103080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.103086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.103504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.103510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.103933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.103940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.104366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.104372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.104614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.104621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.104824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.104832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.105284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.105290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.105717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.105724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 
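The repeated "connect() failed, errno = 111" entries above are the host side being refused while nothing is accepting on 10.0.0.2:4420 yet; on Linux errno 111 is ECONNREFUSED. A minimal standalone C sketch (illustrative only, not part of the SPDK code being tested) that reproduces the same errno when no listener is present on that address and port:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical repro: address and port mirror the log above. */
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no NVMe/TCP listener up this prints errno 111: Connection refused. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}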
00:30:41.945 [2024-07-25 07:36:49.106026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.106034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.106340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.106347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.106710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.106716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.107144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.945 [2024-07-25 07:36:49.107151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9d98000b90 with addr=10.0.0.2, port=4420 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.107418] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.945 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.945 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:41.945 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.945 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:41.945 [2024-07-25 07:36:49.118009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.945 [2024-07-25 07:36:49.118111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.945 [2024-07-25 07:36:49.118126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.945 [2024-07-25 07:36:49.118132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.945 [2024-07-25 07:36:49.118137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.945 [2024-07-25 07:36:49.118152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.945 qpair failed and we were unable to recover it. 
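Once the listener comes up ("NVMe/TCP Target Listening on 10.0.0.2 port 4420"), the failures change character: the target logs "Unknown controller ID 0x1" and the host reports the Fabrics CONNECT command completing with "sct 1, sc 130". The status is easier to read in hex; the sketch below is only an illustration, and the interpretation in the comment is an assumption based on the NVMe-oF Fabrics Connect status values, not something stated in this log:

#include <stdio.h>

int main(void)
{
    /* Values reported by the host above. */
    int sct = 1;   /* status code type 1 = command-specific status */
    int sc  = 130; /* 130 == 0x82; for a Fabrics CONNECT this is commonly the
                    * "connect invalid parameters" status (assumption), which is
                    * consistent with the target rejecting an I/O qpair that
                    * references an unknown controller ID. */
    printf("sct=0x%x sc=0x%x\n", sct, sc);
    return 0;
}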
00:30:41.945 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.945 07:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 286174 00:30:41.945 [2024-07-25 07:36:49.127956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.945 [2024-07-25 07:36:49.128043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.945 [2024-07-25 07:36:49.128056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.945 [2024-07-25 07:36:49.128062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.945 [2024-07-25 07:36:49.128066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.945 [2024-07-25 07:36:49.128079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.137910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.945 [2024-07-25 07:36:49.137994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.945 [2024-07-25 07:36:49.138007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.945 [2024-07-25 07:36:49.138012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.945 [2024-07-25 07:36:49.138016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.945 [2024-07-25 07:36:49.138028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.147972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.945 [2024-07-25 07:36:49.148054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.945 [2024-07-25 07:36:49.148067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.945 [2024-07-25 07:36:49.148072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.945 [2024-07-25 07:36:49.148076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.945 [2024-07-25 07:36:49.148088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.945 qpair failed and we were unable to recover it. 
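After the shell trace shows the script waiting on the background process ("wait 286174"), the same rejection repeats for each I/O qpair attempt, each ending in "CQ transport error -6 (No such device or address)". The -6 is a negated errno; a small illustrative C check confirms the errno strings quoted in this section:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The two errno values seen in this part of the log. */
    printf("errno 111: %s\n", strerror(111)); /* Connection refused */
    printf("errno   6: %s\n", strerror(6));   /* No such device or address */
    return 0;
}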
00:30:41.945 [2024-07-25 07:36:49.158000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.945 [2024-07-25 07:36:49.158089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.945 [2024-07-25 07:36:49.158102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.945 [2024-07-25 07:36:49.158107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.945 [2024-07-25 07:36:49.158111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.945 [2024-07-25 07:36:49.158123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.167948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.945 [2024-07-25 07:36:49.168026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.945 [2024-07-25 07:36:49.168038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.945 [2024-07-25 07:36:49.168044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.945 [2024-07-25 07:36:49.168048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.945 [2024-07-25 07:36:49.168059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.945 [2024-07-25 07:36:49.178021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.945 [2024-07-25 07:36:49.178100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.945 [2024-07-25 07:36:49.178113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.945 [2024-07-25 07:36:49.178118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.945 [2024-07-25 07:36:49.178123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.945 [2024-07-25 07:36:49.178134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.945 qpair failed and we were unable to recover it. 
00:30:41.945 [2024-07-25 07:36:49.188173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.945 [2024-07-25 07:36:49.188259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.945 [2024-07-25 07:36:49.188272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.945 [2024-07-25 07:36:49.188278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.945 [2024-07-25 07:36:49.188283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.945 [2024-07-25 07:36:49.188294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.945 qpair failed and we were unable to recover it. 00:30:41.946 [2024-07-25 07:36:49.198049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.946 [2024-07-25 07:36:49.198134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.946 [2024-07-25 07:36:49.198147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.946 [2024-07-25 07:36:49.198155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.946 [2024-07-25 07:36:49.198159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.946 [2024-07-25 07:36:49.198170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.946 qpair failed and we were unable to recover it. 00:30:41.946 [2024-07-25 07:36:49.208082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.946 [2024-07-25 07:36:49.208160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.946 [2024-07-25 07:36:49.208172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.946 [2024-07-25 07:36:49.208177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.946 [2024-07-25 07:36:49.208182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.946 [2024-07-25 07:36:49.208192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.946 qpair failed and we were unable to recover it. 
00:30:41.946 [2024-07-25 07:36:49.218124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.946 [2024-07-25 07:36:49.218204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.946 [2024-07-25 07:36:49.218216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.946 [2024-07-25 07:36:49.218222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.946 [2024-07-25 07:36:49.218226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.946 [2024-07-25 07:36:49.218237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.946 qpair failed and we were unable to recover it. 00:30:41.946 [2024-07-25 07:36:49.228042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.946 [2024-07-25 07:36:49.228124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.946 [2024-07-25 07:36:49.228136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.946 [2024-07-25 07:36:49.228141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.946 [2024-07-25 07:36:49.228146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.946 [2024-07-25 07:36:49.228157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.946 qpair failed and we were unable to recover it. 00:30:41.946 [2024-07-25 07:36:49.238229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.946 [2024-07-25 07:36:49.238313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.946 [2024-07-25 07:36:49.238325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.946 [2024-07-25 07:36:49.238331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.946 [2024-07-25 07:36:49.238335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.946 [2024-07-25 07:36:49.238347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.946 qpair failed and we were unable to recover it. 
00:30:41.946 [2024-07-25 07:36:49.248239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.946 [2024-07-25 07:36:49.248316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.946 [2024-07-25 07:36:49.248328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.946 [2024-07-25 07:36:49.248334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.946 [2024-07-25 07:36:49.248338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.946 [2024-07-25 07:36:49.248349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.946 qpair failed and we were unable to recover it. 00:30:41.946 [2024-07-25 07:36:49.258274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.946 [2024-07-25 07:36:49.258400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.946 [2024-07-25 07:36:49.258413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.946 [2024-07-25 07:36:49.258419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.946 [2024-07-25 07:36:49.258423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.946 [2024-07-25 07:36:49.258434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.946 qpair failed and we were unable to recover it. 00:30:41.946 [2024-07-25 07:36:49.268326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.946 [2024-07-25 07:36:49.268402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.946 [2024-07-25 07:36:49.268414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.946 [2024-07-25 07:36:49.268420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.946 [2024-07-25 07:36:49.268424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.946 [2024-07-25 07:36:49.268435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.946 qpair failed and we were unable to recover it. 
00:30:41.946 [2024-07-25 07:36:49.278326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.946 [2024-07-25 07:36:49.278408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.946 [2024-07-25 07:36:49.278421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.946 [2024-07-25 07:36:49.278426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.946 [2024-07-25 07:36:49.278431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.946 [2024-07-25 07:36:49.278442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.946 qpair failed and we were unable to recover it. 00:30:41.946 [2024-07-25 07:36:49.288336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.946 [2024-07-25 07:36:49.288414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.946 [2024-07-25 07:36:49.288426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.946 [2024-07-25 07:36:49.288434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.946 [2024-07-25 07:36:49.288438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.946 [2024-07-25 07:36:49.288450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.946 qpair failed and we were unable to recover it. 00:30:41.946 [2024-07-25 07:36:49.298344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.946 [2024-07-25 07:36:49.298460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.946 [2024-07-25 07:36:49.298473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.946 [2024-07-25 07:36:49.298478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.946 [2024-07-25 07:36:49.298483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:41.946 [2024-07-25 07:36:49.298494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:41.946 qpair failed and we were unable to recover it. 
00:30:42.209 [2024-07-25 07:36:49.308352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.209 [2024-07-25 07:36:49.308431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.209 [2024-07-25 07:36:49.308443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.209 [2024-07-25 07:36:49.308448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.209 [2024-07-25 07:36:49.308453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.209 [2024-07-25 07:36:49.308464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.209 qpair failed and we were unable to recover it. 00:30:42.209 [2024-07-25 07:36:49.318324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.209 [2024-07-25 07:36:49.318404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.209 [2024-07-25 07:36:49.318417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.209 [2024-07-25 07:36:49.318422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.209 [2024-07-25 07:36:49.318427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.209 [2024-07-25 07:36:49.318438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.209 qpair failed and we were unable to recover it. 00:30:42.209 [2024-07-25 07:36:49.328448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.209 [2024-07-25 07:36:49.328523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.209 [2024-07-25 07:36:49.328535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.209 [2024-07-25 07:36:49.328540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.209 [2024-07-25 07:36:49.328545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.209 [2024-07-25 07:36:49.328556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.209 qpair failed and we were unable to recover it. 
00:30:42.209 [2024-07-25 07:36:49.338456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.209 [2024-07-25 07:36:49.338533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.209 [2024-07-25 07:36:49.338545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.209 [2024-07-25 07:36:49.338550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.209 [2024-07-25 07:36:49.338554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.209 [2024-07-25 07:36:49.338566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.209 qpair failed and we were unable to recover it. 00:30:42.209 [2024-07-25 07:36:49.348481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.209 [2024-07-25 07:36:49.348567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.209 [2024-07-25 07:36:49.348580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.209 [2024-07-25 07:36:49.348585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.209 [2024-07-25 07:36:49.348589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.209 [2024-07-25 07:36:49.348600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.209 qpair failed and we were unable to recover it. 00:30:42.209 [2024-07-25 07:36:49.358526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.209 [2024-07-25 07:36:49.358616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.209 [2024-07-25 07:36:49.358628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.358633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.358638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.358649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 
00:30:42.210 [2024-07-25 07:36:49.368589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.368709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.368721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.368727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.368731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.368742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 00:30:42.210 [2024-07-25 07:36:49.378584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.378659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.378675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.378680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.378684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.378695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 00:30:42.210 [2024-07-25 07:36:49.388687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.388771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.388784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.388789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.388793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.388804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 
00:30:42.210 [2024-07-25 07:36:49.398703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.398789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.398808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.398815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.398819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.398835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 00:30:42.210 [2024-07-25 07:36:49.408732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.408837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.408857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.408863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.408868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.408884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 00:30:42.210 [2024-07-25 07:36:49.418709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.418791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.418810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.418817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.418821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.418840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 
00:30:42.210 [2024-07-25 07:36:49.428718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.428801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.428820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.428827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.428831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.428846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 00:30:42.210 [2024-07-25 07:36:49.438706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.438788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.438802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.438808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.438812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.438825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 00:30:42.210 [2024-07-25 07:36:49.448780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.448858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.448872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.448878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.448882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.448895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 
00:30:42.210 [2024-07-25 07:36:49.458660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.458739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.458752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.458757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.458762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.458774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 00:30:42.210 [2024-07-25 07:36:49.468832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.468915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.468931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.468937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.468941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.468952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 00:30:42.210 [2024-07-25 07:36:49.478898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.479013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.479032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.479039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.479044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.210 [2024-07-25 07:36:49.479059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.210 qpair failed and we were unable to recover it. 
00:30:42.210 [2024-07-25 07:36:49.488772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.210 [2024-07-25 07:36:49.488855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.210 [2024-07-25 07:36:49.488874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.210 [2024-07-25 07:36:49.488881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.210 [2024-07-25 07:36:49.488886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.211 [2024-07-25 07:36:49.488901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.211 qpair failed and we were unable to recover it. 00:30:42.211 [2024-07-25 07:36:49.498993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.211 [2024-07-25 07:36:49.499249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.211 [2024-07-25 07:36:49.499269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.211 [2024-07-25 07:36:49.499275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.211 [2024-07-25 07:36:49.499280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.211 [2024-07-25 07:36:49.499295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.211 qpair failed and we were unable to recover it. 00:30:42.211 [2024-07-25 07:36:49.508919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.211 [2024-07-25 07:36:49.509000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.211 [2024-07-25 07:36:49.509015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.211 [2024-07-25 07:36:49.509021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.211 [2024-07-25 07:36:49.509029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.211 [2024-07-25 07:36:49.509041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.211 qpair failed and we were unable to recover it. 
00:30:42.211 [2024-07-25 07:36:49.518949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.211 [2024-07-25 07:36:49.519039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.211 [2024-07-25 07:36:49.519058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.211 [2024-07-25 07:36:49.519065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.211 [2024-07-25 07:36:49.519069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.211 [2024-07-25 07:36:49.519085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.211 qpair failed and we were unable to recover it. 00:30:42.211 [2024-07-25 07:36:49.528985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.211 [2024-07-25 07:36:49.529063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.211 [2024-07-25 07:36:49.529077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.211 [2024-07-25 07:36:49.529082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.211 [2024-07-25 07:36:49.529087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.211 [2024-07-25 07:36:49.529099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.211 qpair failed and we were unable to recover it. 00:30:42.211 [2024-07-25 07:36:49.538886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.211 [2024-07-25 07:36:49.538962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.211 [2024-07-25 07:36:49.538975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.211 [2024-07-25 07:36:49.538981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.211 [2024-07-25 07:36:49.538986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.211 [2024-07-25 07:36:49.538998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.211 qpair failed and we were unable to recover it. 
00:30:42.211 [2024-07-25 07:36:49.548963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.211 [2024-07-25 07:36:49.549041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.211 [2024-07-25 07:36:49.549054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.211 [2024-07-25 07:36:49.549060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.211 [2024-07-25 07:36:49.549064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.211 [2024-07-25 07:36:49.549076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.211 qpair failed and we were unable to recover it. 00:30:42.211 [2024-07-25 07:36:49.559040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.211 [2024-07-25 07:36:49.559129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.211 [2024-07-25 07:36:49.559142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.211 [2024-07-25 07:36:49.559147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.211 [2024-07-25 07:36:49.559152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.211 [2024-07-25 07:36:49.559163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.211 qpair failed and we were unable to recover it. 00:30:42.211 [2024-07-25 07:36:49.569005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.211 [2024-07-25 07:36:49.569103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.211 [2024-07-25 07:36:49.569115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.211 [2024-07-25 07:36:49.569121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.211 [2024-07-25 07:36:49.569125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.211 [2024-07-25 07:36:49.569136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.211 qpair failed and we were unable to recover it. 
00:30:42.474 [2024-07-25 07:36:49.579088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.474 [2024-07-25 07:36:49.579167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.474 [2024-07-25 07:36:49.579179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.474 [2024-07-25 07:36:49.579185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.474 [2024-07-25 07:36:49.579189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.474 [2024-07-25 07:36:49.579204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.474 qpair failed and we were unable to recover it. 00:30:42.474 [2024-07-25 07:36:49.589154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.474 [2024-07-25 07:36:49.589234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.474 [2024-07-25 07:36:49.589246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.474 [2024-07-25 07:36:49.589252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.474 [2024-07-25 07:36:49.589256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.474 [2024-07-25 07:36:49.589268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.474 qpair failed and we were unable to recover it. 00:30:42.474 [2024-07-25 07:36:49.599041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.474 [2024-07-25 07:36:49.599139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.474 [2024-07-25 07:36:49.599152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.474 [2024-07-25 07:36:49.599158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.474 [2024-07-25 07:36:49.599166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.474 [2024-07-25 07:36:49.599177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.474 qpair failed and we were unable to recover it. 
00:30:42.474 [2024-07-25 07:36:49.609238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.474 [2024-07-25 07:36:49.609349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.474 [2024-07-25 07:36:49.609362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.474 [2024-07-25 07:36:49.609367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.474 [2024-07-25 07:36:49.609371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.474 [2024-07-25 07:36:49.609383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.474 qpair failed and we were unable to recover it. 00:30:42.474 [2024-07-25 07:36:49.619209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.474 [2024-07-25 07:36:49.619283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.474 [2024-07-25 07:36:49.619296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.619301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.619305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.619317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 00:30:42.475 [2024-07-25 07:36:49.629246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.629328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.629341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.629346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.629350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.629362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 
00:30:42.475 [2024-07-25 07:36:49.639255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.639335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.639347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.639352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.639357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.639368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 00:30:42.475 [2024-07-25 07:36:49.649297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.649374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.649387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.649392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.649396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.649408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 00:30:42.475 [2024-07-25 07:36:49.659283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.659365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.659377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.659382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.659387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.659398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 
00:30:42.475 [2024-07-25 07:36:49.669353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.669434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.669447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.669452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.669457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.669469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 00:30:42.475 [2024-07-25 07:36:49.679330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.679411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.679423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.679429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.679433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.679445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 00:30:42.475 [2024-07-25 07:36:49.689430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.689509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.689521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.689530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.689534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.689546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 
00:30:42.475 [2024-07-25 07:36:49.699431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.699507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.699520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.699525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.699529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.699540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 00:30:42.475 [2024-07-25 07:36:49.709470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.709547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.709559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.709564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.709569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.709580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 00:30:42.475 [2024-07-25 07:36:49.719525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.719606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.719618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.719623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.719627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.719639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 
00:30:42.475 [2024-07-25 07:36:49.729461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.729538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.729551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.729557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.729561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.729572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 00:30:42.475 [2024-07-25 07:36:49.739530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.739606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.739619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.739624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.475 [2024-07-25 07:36:49.739629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.475 [2024-07-25 07:36:49.739639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.475 qpair failed and we were unable to recover it. 00:30:42.475 [2024-07-25 07:36:49.749622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.475 [2024-07-25 07:36:49.749743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.475 [2024-07-25 07:36:49.749755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.475 [2024-07-25 07:36:49.749761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.476 [2024-07-25 07:36:49.749765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.476 [2024-07-25 07:36:49.749777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.476 qpair failed and we were unable to recover it. 
00:30:42.476 [2024-07-25 07:36:49.759615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.476 [2024-07-25 07:36:49.759700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.476 [2024-07-25 07:36:49.759712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.476 [2024-07-25 07:36:49.759718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.476 [2024-07-25 07:36:49.759722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.476 [2024-07-25 07:36:49.759734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.476 qpair failed and we were unable to recover it. 00:30:42.476 [2024-07-25 07:36:49.769633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.476 [2024-07-25 07:36:49.769709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.476 [2024-07-25 07:36:49.769721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.476 [2024-07-25 07:36:49.769726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.476 [2024-07-25 07:36:49.769731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.476 [2024-07-25 07:36:49.769742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.476 qpair failed and we were unable to recover it. 00:30:42.476 [2024-07-25 07:36:49.779644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.476 [2024-07-25 07:36:49.779729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.476 [2024-07-25 07:36:49.779751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.476 [2024-07-25 07:36:49.779758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.476 [2024-07-25 07:36:49.779763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.476 [2024-07-25 07:36:49.779778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.476 qpair failed and we were unable to recover it. 
00:30:42.476 [2024-07-25 07:36:49.789717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.476 [2024-07-25 07:36:49.789799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.476 [2024-07-25 07:36:49.789812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.476 [2024-07-25 07:36:49.789818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.476 [2024-07-25 07:36:49.789822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.476 [2024-07-25 07:36:49.789834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.476 qpair failed and we were unable to recover it. 00:30:42.476 [2024-07-25 07:36:49.799741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.476 [2024-07-25 07:36:49.799827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.476 [2024-07-25 07:36:49.799846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.476 [2024-07-25 07:36:49.799853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.476 [2024-07-25 07:36:49.799858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.476 [2024-07-25 07:36:49.799873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.476 qpair failed and we were unable to recover it. 00:30:42.476 [2024-07-25 07:36:49.809749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.476 [2024-07-25 07:36:49.809860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.476 [2024-07-25 07:36:49.809880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.476 [2024-07-25 07:36:49.809887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.476 [2024-07-25 07:36:49.809891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.476 [2024-07-25 07:36:49.809907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.476 qpair failed and we were unable to recover it. 
00:30:42.476 [2024-07-25 07:36:49.819803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.476 [2024-07-25 07:36:49.819891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.476 [2024-07-25 07:36:49.819911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.476 [2024-07-25 07:36:49.819918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.476 [2024-07-25 07:36:49.819922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.476 [2024-07-25 07:36:49.819942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.476 qpair failed and we were unable to recover it. 00:30:42.476 [2024-07-25 07:36:49.829743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.476 [2024-07-25 07:36:49.829825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.476 [2024-07-25 07:36:49.829845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.476 [2024-07-25 07:36:49.829852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.476 [2024-07-25 07:36:49.829857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.476 [2024-07-25 07:36:49.829871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.476 qpair failed and we were unable to recover it. 00:30:42.476 [2024-07-25 07:36:49.839821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.476 [2024-07-25 07:36:49.839906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.476 [2024-07-25 07:36:49.839926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.476 [2024-07-25 07:36:49.839932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.476 [2024-07-25 07:36:49.839937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.476 [2024-07-25 07:36:49.839952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.476 qpair failed and we were unable to recover it. 
00:30:42.739 [2024-07-25 07:36:49.849863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.849941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.849955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.849960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.849965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.849977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 00:30:42.739 [2024-07-25 07:36:49.859868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.859944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.859957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.859962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.859967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.859979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 00:30:42.739 [2024-07-25 07:36:49.869924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.870007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.870027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.870032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.870037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.870048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 
00:30:42.739 [2024-07-25 07:36:49.879953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.880042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.880061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.880068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.880072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.880088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 00:30:42.739 [2024-07-25 07:36:49.889955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.890030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.890045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.890052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.890057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.890070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 00:30:42.739 [2024-07-25 07:36:49.899965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.900043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.900056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.900061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.900066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.900077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 
00:30:42.739 [2024-07-25 07:36:49.910016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.910095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.910108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.910113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.910121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.910133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 00:30:42.739 [2024-07-25 07:36:49.920031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.920113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.920127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.920133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.920137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.920149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 00:30:42.739 [2024-07-25 07:36:49.930105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.930178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.930191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.930196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.930205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.930217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 
00:30:42.739 [2024-07-25 07:36:49.940212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.940292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.940304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.940310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.940314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.940325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 00:30:42.739 [2024-07-25 07:36:49.950140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.950224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.950237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.950242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.950246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.950258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 00:30:42.739 [2024-07-25 07:36:49.960158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.960263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.960276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.960282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.960286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.960298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.739 qpair failed and we were unable to recover it. 
00:30:42.739 [2024-07-25 07:36:49.970207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.739 [2024-07-25 07:36:49.970282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.739 [2024-07-25 07:36:49.970295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.739 [2024-07-25 07:36:49.970300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.739 [2024-07-25 07:36:49.970305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.739 [2024-07-25 07:36:49.970316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 00:30:42.740 [2024-07-25 07:36:49.980214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:49.980302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:49.980315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:49.980321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:49.980325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:49.980336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 00:30:42.740 [2024-07-25 07:36:49.990258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:49.990335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:49.990347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:49.990353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:49.990357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:49.990368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 
00:30:42.740 [2024-07-25 07:36:50.000317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:50.000449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:50.000461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:50.000467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:50.000474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:50.000486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 00:30:42.740 [2024-07-25 07:36:50.010316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:50.010439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:50.010453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:50.010459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:50.010464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:50.010475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 00:30:42.740 [2024-07-25 07:36:50.020396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:50.020605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:50.020619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:50.020624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:50.020629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:50.020641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 
00:30:42.740 [2024-07-25 07:36:50.030386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:50.030462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:50.030475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:50.030481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:50.030485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:50.030497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 00:30:42.740 [2024-07-25 07:36:50.040322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:50.040419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:50.040432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:50.040438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:50.040442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:50.040454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 00:30:42.740 [2024-07-25 07:36:50.050347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:50.050449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:50.050463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:50.050469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:50.050473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:50.050486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 
00:30:42.740 [2024-07-25 07:36:50.060458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:50.060532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:50.060545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:50.060551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:50.060555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:50.060566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 00:30:42.740 [2024-07-25 07:36:50.070492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:50.070572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:50.070586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:50.070592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:50.070596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:50.070608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 00:30:42.740 [2024-07-25 07:36:50.080412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:50.080495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:50.080507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:50.080513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:50.080517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:50.080528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 
00:30:42.740 [2024-07-25 07:36:50.090548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:50.090624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:50.090636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:50.090644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:50.090649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:50.090660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 00:30:42.740 [2024-07-25 07:36:50.100557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.740 [2024-07-25 07:36:50.100633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.740 [2024-07-25 07:36:50.100646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.740 [2024-07-25 07:36:50.100651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.740 [2024-07-25 07:36:50.100656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:42.740 [2024-07-25 07:36:50.100668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:42.740 qpair failed and we were unable to recover it. 00:30:43.003 [2024-07-25 07:36:50.110582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.003 [2024-07-25 07:36:50.110663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.003 [2024-07-25 07:36:50.110675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.003 [2024-07-25 07:36:50.110681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.003 [2024-07-25 07:36:50.110685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.003 [2024-07-25 07:36:50.110697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.003 qpair failed and we were unable to recover it. 
00:30:43.003 [2024-07-25 07:36:50.120625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.003 [2024-07-25 07:36:50.120708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.003 [2024-07-25 07:36:50.120721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.003 [2024-07-25 07:36:50.120727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.003 [2024-07-25 07:36:50.120731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.003 [2024-07-25 07:36:50.120743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.003 qpair failed and we were unable to recover it. 00:30:43.003 [2024-07-25 07:36:50.130649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.003 [2024-07-25 07:36:50.130723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.003 [2024-07-25 07:36:50.130736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.003 [2024-07-25 07:36:50.130741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.003 [2024-07-25 07:36:50.130746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.003 [2024-07-25 07:36:50.130757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.003 qpair failed and we were unable to recover it. 00:30:43.003 [2024-07-25 07:36:50.140640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.003 [2024-07-25 07:36:50.140715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.003 [2024-07-25 07:36:50.140727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.003 [2024-07-25 07:36:50.140732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.003 [2024-07-25 07:36:50.140737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.003 [2024-07-25 07:36:50.140748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.003 qpair failed and we were unable to recover it. 
00:30:43.003 [2024-07-25 07:36:50.150728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.003 [2024-07-25 07:36:50.150807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.003 [2024-07-25 07:36:50.150820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.003 [2024-07-25 07:36:50.150825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.003 [2024-07-25 07:36:50.150829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.003 [2024-07-25 07:36:50.150840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.003 qpair failed and we were unable to recover it. 00:30:43.003 [2024-07-25 07:36:50.160623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.003 [2024-07-25 07:36:50.160713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.003 [2024-07-25 07:36:50.160732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.160739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.160744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.160759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 00:30:43.004 [2024-07-25 07:36:50.170758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.170838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.170853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.170858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.170863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.170875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 
00:30:43.004 [2024-07-25 07:36:50.180757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.180836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.180859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.180865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.180870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.180885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 00:30:43.004 [2024-07-25 07:36:50.190853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.190962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.190976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.190982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.190986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.190999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 00:30:43.004 [2024-07-25 07:36:50.200804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.200906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.200926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.200932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.200937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.200952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 
00:30:43.004 [2024-07-25 07:36:50.210843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.210920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.210939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.210946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.210951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.210966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 00:30:43.004 [2024-07-25 07:36:50.220865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.220946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.220966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.220972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.220977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.220996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 00:30:43.004 [2024-07-25 07:36:50.230931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.231045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.231060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.231066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.231070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.231083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 
00:30:43.004 [2024-07-25 07:36:50.241095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.241186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.241209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.241216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.241221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.241236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 00:30:43.004 [2024-07-25 07:36:50.250959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.251037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.251051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.251056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.251060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.251072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 00:30:43.004 [2024-07-25 07:36:50.260975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.261055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.261068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.261073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.261078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.261089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 
00:30:43.004 [2024-07-25 07:36:50.271002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.271079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.271095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.271101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.271105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.271117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 00:30:43.004 [2024-07-25 07:36:50.280942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.281023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.281036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.281041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.281045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.004 [2024-07-25 07:36:50.281057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.004 qpair failed and we were unable to recover it. 00:30:43.004 [2024-07-25 07:36:50.291088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.004 [2024-07-25 07:36:50.291166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.004 [2024-07-25 07:36:50.291179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.004 [2024-07-25 07:36:50.291184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.004 [2024-07-25 07:36:50.291188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.005 [2024-07-25 07:36:50.291204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.005 qpair failed and we were unable to recover it. 
00:30:43.005 [2024-07-25 07:36:50.301124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.005 [2024-07-25 07:36:50.301205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.005 [2024-07-25 07:36:50.301218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.005 [2024-07-25 07:36:50.301223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.005 [2024-07-25 07:36:50.301227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.005 [2024-07-25 07:36:50.301238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.005 qpair failed and we were unable to recover it. 00:30:43.005 [2024-07-25 07:36:50.311164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.005 [2024-07-25 07:36:50.311243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.005 [2024-07-25 07:36:50.311255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.005 [2024-07-25 07:36:50.311260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.005 [2024-07-25 07:36:50.311265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.005 [2024-07-25 07:36:50.311279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.005 qpair failed and we were unable to recover it. 00:30:43.005 [2024-07-25 07:36:50.321058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.005 [2024-07-25 07:36:50.321142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.005 [2024-07-25 07:36:50.321155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.005 [2024-07-25 07:36:50.321160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.005 [2024-07-25 07:36:50.321165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.005 [2024-07-25 07:36:50.321177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.005 qpair failed and we were unable to recover it. 
00:30:43.005 [2024-07-25 07:36:50.331183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.005 [2024-07-25 07:36:50.331260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.005 [2024-07-25 07:36:50.331273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.005 [2024-07-25 07:36:50.331278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.005 [2024-07-25 07:36:50.331283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.005 [2024-07-25 07:36:50.331294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.005 qpair failed and we were unable to recover it. 00:30:43.005 [2024-07-25 07:36:50.341240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.005 [2024-07-25 07:36:50.341318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.005 [2024-07-25 07:36:50.341330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.005 [2024-07-25 07:36:50.341336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.005 [2024-07-25 07:36:50.341340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.005 [2024-07-25 07:36:50.341351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.005 qpair failed and we were unable to recover it. 00:30:43.005 [2024-07-25 07:36:50.351258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.005 [2024-07-25 07:36:50.351338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.005 [2024-07-25 07:36:50.351351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.005 [2024-07-25 07:36:50.351356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.005 [2024-07-25 07:36:50.351360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.005 [2024-07-25 07:36:50.351372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.005 qpair failed and we were unable to recover it. 
00:30:43.005 [2024-07-25 07:36:50.361407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.005 [2024-07-25 07:36:50.361510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.005 [2024-07-25 07:36:50.361522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.005 [2024-07-25 07:36:50.361528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.005 [2024-07-25 07:36:50.361532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.005 [2024-07-25 07:36:50.361543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.005 qpair failed and we were unable to recover it. 00:30:43.267 [2024-07-25 07:36:50.371339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.371444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.371457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.371462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.371467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.371478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 00:30:43.268 [2024-07-25 07:36:50.381496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.381575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.381588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.381593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.381597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.381608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 
00:30:43.268 [2024-07-25 07:36:50.391401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.391482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.391494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.391500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.391504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.391515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 00:30:43.268 [2024-07-25 07:36:50.401287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.401367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.401380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.401385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.401392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.401404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 00:30:43.268 [2024-07-25 07:36:50.411464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.411549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.411561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.411567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.411571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.411582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 
00:30:43.268 [2024-07-25 07:36:50.421458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.421538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.421551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.421556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.421560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.421571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 00:30:43.268 [2024-07-25 07:36:50.431488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.431568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.431580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.431586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.431590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.431601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 00:30:43.268 [2024-07-25 07:36:50.441526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.441637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.441649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.441654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.441659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.441670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 
00:30:43.268 [2024-07-25 07:36:50.451552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.451638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.451651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.451656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.451660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.451671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 00:30:43.268 [2024-07-25 07:36:50.461577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.461646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.461659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.461664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.461668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.461679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 00:30:43.268 [2024-07-25 07:36:50.471623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.471704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.471717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.471722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.471726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.471737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 
00:30:43.268 [2024-07-25 07:36:50.481649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.481728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.481740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.481745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.481749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.268 [2024-07-25 07:36:50.481761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.268 qpair failed and we were unable to recover it. 00:30:43.268 [2024-07-25 07:36:50.491627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.268 [2024-07-25 07:36:50.491704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.268 [2024-07-25 07:36:50.491717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.268 [2024-07-25 07:36:50.491725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.268 [2024-07-25 07:36:50.491729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.491740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 00:30:43.269 [2024-07-25 07:36:50.501628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.501710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.501729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.501736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.501740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.501755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 
00:30:43.269 [2024-07-25 07:36:50.511731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.511809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.511823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.511828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.511833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.511845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 00:30:43.269 [2024-07-25 07:36:50.521763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.521895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.521915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.521921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.521926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.521941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 00:30:43.269 [2024-07-25 07:36:50.531766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.531846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.531865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.531872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.531876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.531891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 
00:30:43.269 [2024-07-25 07:36:50.541774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.541852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.541872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.541878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.541883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.541898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 00:30:43.269 [2024-07-25 07:36:50.551747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.551857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.551876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.551883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.551888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.551903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 00:30:43.269 [2024-07-25 07:36:50.561853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.561937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.561950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.561956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.561960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.561972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 
00:30:43.269 [2024-07-25 07:36:50.571977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.572059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.572078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.572085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.572090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.572105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 00:30:43.269 [2024-07-25 07:36:50.581898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.581989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.582002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.582015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.582019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.582032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 00:30:43.269 [2024-07-25 07:36:50.591912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.591990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.592002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.592008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.592012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.592024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 
00:30:43.269 [2024-07-25 07:36:50.601962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.602046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.602066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.602072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.602077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.602092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 00:30:43.269 [2024-07-25 07:36:50.611958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.612035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.612048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.612053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.612058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.612070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 00:30:43.269 [2024-07-25 07:36:50.622021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.269 [2024-07-25 07:36:50.622098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.269 [2024-07-25 07:36:50.622111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.269 [2024-07-25 07:36:50.622117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.269 [2024-07-25 07:36:50.622121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.269 [2024-07-25 07:36:50.622133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.269 qpair failed and we were unable to recover it. 
00:30:43.270 [2024-07-25 07:36:50.632027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.270 [2024-07-25 07:36:50.632104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.270 [2024-07-25 07:36:50.632117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.270 [2024-07-25 07:36:50.632122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.270 [2024-07-25 07:36:50.632126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.270 [2024-07-25 07:36:50.632138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.270 qpair failed and we were unable to recover it. 00:30:43.532 [2024-07-25 07:36:50.642085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.532 [2024-07-25 07:36:50.642168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.532 [2024-07-25 07:36:50.642181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.532 [2024-07-25 07:36:50.642187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.532 [2024-07-25 07:36:50.642191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.532 [2024-07-25 07:36:50.642207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.532 qpair failed and we were unable to recover it. 00:30:43.532 [2024-07-25 07:36:50.652109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.532 [2024-07-25 07:36:50.652187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.532 [2024-07-25 07:36:50.652203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.532 [2024-07-25 07:36:50.652209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.532 [2024-07-25 07:36:50.652213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.532 [2024-07-25 07:36:50.652225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.532 qpair failed and we were unable to recover it. 
00:30:43.532 [2024-07-25 07:36:50.662021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.532 [2024-07-25 07:36:50.662125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.532 [2024-07-25 07:36:50.662138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.532 [2024-07-25 07:36:50.662143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.532 [2024-07-25 07:36:50.662148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.532 [2024-07-25 07:36:50.662159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.532 qpair failed and we were unable to recover it. 00:30:43.532 [2024-07-25 07:36:50.672037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.532 [2024-07-25 07:36:50.672135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.532 [2024-07-25 07:36:50.672150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.532 [2024-07-25 07:36:50.672156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.532 [2024-07-25 07:36:50.672160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.532 [2024-07-25 07:36:50.672172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.532 qpair failed and we were unable to recover it. 00:30:43.532 [2024-07-25 07:36:50.682222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.532 [2024-07-25 07:36:50.682303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.532 [2024-07-25 07:36:50.682316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.532 [2024-07-25 07:36:50.682321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.532 [2024-07-25 07:36:50.682326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.532 [2024-07-25 07:36:50.682337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.532 qpair failed and we were unable to recover it. 
00:30:43.532 [2024-07-25 07:36:50.692187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.532 [2024-07-25 07:36:50.692268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.532 [2024-07-25 07:36:50.692281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.532 [2024-07-25 07:36:50.692286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.532 [2024-07-25 07:36:50.692290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.532 [2024-07-25 07:36:50.692301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.532 qpair failed and we were unable to recover it. 00:30:43.532 [2024-07-25 07:36:50.702274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.532 [2024-07-25 07:36:50.702363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.532 [2024-07-25 07:36:50.702376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.532 [2024-07-25 07:36:50.702381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.532 [2024-07-25 07:36:50.702385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.532 [2024-07-25 07:36:50.702397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.532 qpair failed and we were unable to recover it. 00:30:43.532 [2024-07-25 07:36:50.712267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.712346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.712359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.712364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.712368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.712382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 
00:30:43.533 [2024-07-25 07:36:50.722179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.722265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.722278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.722283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.722287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.722299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 00:30:43.533 [2024-07-25 07:36:50.732332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.732409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.732422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.732427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.732432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.732443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 00:30:43.533 [2024-07-25 07:36:50.742361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.742436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.742448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.742454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.742458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.742469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 
00:30:43.533 [2024-07-25 07:36:50.752425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.752546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.752559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.752565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.752569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.752580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 00:30:43.533 [2024-07-25 07:36:50.762288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.762370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.762386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.762391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.762395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.762407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 00:30:43.533 [2024-07-25 07:36:50.772444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.772537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.772550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.772555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.772559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.772571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 
00:30:43.533 [2024-07-25 07:36:50.782468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.782550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.782562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.782568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.782572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.782583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 00:30:43.533 [2024-07-25 07:36:50.792466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.792545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.792557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.792563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.792567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.792578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 00:30:43.533 [2024-07-25 07:36:50.802562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.802691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.802703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.802708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.802716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.802727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 
00:30:43.533 [2024-07-25 07:36:50.812565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.812642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.812655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.812660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.812664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.812675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 00:30:43.533 [2024-07-25 07:36:50.822573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.822655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.822667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.822673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.822677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.822688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 00:30:43.533 [2024-07-25 07:36:50.832655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.832762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.832776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.533 [2024-07-25 07:36:50.832781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.533 [2024-07-25 07:36:50.832785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.533 [2024-07-25 07:36:50.832797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.533 qpair failed and we were unable to recover it. 
00:30:43.533 [2024-07-25 07:36:50.842612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.533 [2024-07-25 07:36:50.842697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.533 [2024-07-25 07:36:50.842711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.534 [2024-07-25 07:36:50.842716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.534 [2024-07-25 07:36:50.842720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.534 [2024-07-25 07:36:50.842731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.534 qpair failed and we were unable to recover it. 00:30:43.534 [2024-07-25 07:36:50.852625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.534 [2024-07-25 07:36:50.852706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.534 [2024-07-25 07:36:50.852719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.534 [2024-07-25 07:36:50.852724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.534 [2024-07-25 07:36:50.852729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.534 [2024-07-25 07:36:50.852740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.534 qpair failed and we were unable to recover it. 00:30:43.534 [2024-07-25 07:36:50.862653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.534 [2024-07-25 07:36:50.862730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.534 [2024-07-25 07:36:50.862743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.534 [2024-07-25 07:36:50.862748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.534 [2024-07-25 07:36:50.862752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.534 [2024-07-25 07:36:50.862763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.534 qpair failed and we were unable to recover it. 
00:30:43.534 [2024-07-25 07:36:50.872722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.534 [2024-07-25 07:36:50.872800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.534 [2024-07-25 07:36:50.872813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.534 [2024-07-25 07:36:50.872818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.534 [2024-07-25 07:36:50.872822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.534 [2024-07-25 07:36:50.872833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.534 qpair failed and we were unable to recover it. 00:30:43.534 [2024-07-25 07:36:50.882774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.534 [2024-07-25 07:36:50.882883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.534 [2024-07-25 07:36:50.882895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.534 [2024-07-25 07:36:50.882900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.534 [2024-07-25 07:36:50.882905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.534 [2024-07-25 07:36:50.882916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.534 qpair failed and we were unable to recover it. 00:30:43.534 [2024-07-25 07:36:50.892753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.534 [2024-07-25 07:36:50.892831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.534 [2024-07-25 07:36:50.892850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.534 [2024-07-25 07:36:50.892860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.534 [2024-07-25 07:36:50.892865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.534 [2024-07-25 07:36:50.892880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.534 qpair failed and we were unable to recover it. 
00:30:43.796 [2024-07-25 07:36:50.902815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.796 [2024-07-25 07:36:50.902898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.796 [2024-07-25 07:36:50.902918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.796 [2024-07-25 07:36:50.902925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.796 [2024-07-25 07:36:50.902929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.796 [2024-07-25 07:36:50.902945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.796 qpair failed and we were unable to recover it. 00:30:43.796 [2024-07-25 07:36:50.912843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.796 [2024-07-25 07:36:50.912926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.796 [2024-07-25 07:36:50.912945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.796 [2024-07-25 07:36:50.912952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.796 [2024-07-25 07:36:50.912957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.796 [2024-07-25 07:36:50.912972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.796 qpair failed and we were unable to recover it. 00:30:43.796 [2024-07-25 07:36:50.922751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:50.922846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:50.922860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:50.922865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:50.922870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:50.922882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 
00:30:43.797 [2024-07-25 07:36:50.932888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:50.932962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:50.932974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:50.932980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:50.932984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:50.932996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 00:30:43.797 [2024-07-25 07:36:50.942917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:50.943000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:50.943019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:50.943026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:50.943031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:50.943046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 00:30:43.797 [2024-07-25 07:36:50.952944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:50.953027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:50.953047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:50.953053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:50.953058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:50.953073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 
00:30:43.797 [2024-07-25 07:36:50.962943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:50.963027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:50.963040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:50.963046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:50.963050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:50.963062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 00:30:43.797 [2024-07-25 07:36:50.973003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:50.973111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:50.973124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:50.973129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:50.973134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:50.973146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 00:30:43.797 [2024-07-25 07:36:50.983001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:50.983078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:50.983090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:50.983099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:50.983104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:50.983115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 
00:30:43.797 [2024-07-25 07:36:50.993006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:50.993114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:50.993127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:50.993132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:50.993136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:50.993148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 00:30:43.797 [2024-07-25 07:36:51.003088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:51.003180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:51.003192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:51.003197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:51.003205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:51.003217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 00:30:43.797 [2024-07-25 07:36:51.013115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:51.013190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:51.013205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:51.013211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:51.013216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:51.013227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 
00:30:43.797 [2024-07-25 07:36:51.023151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:51.023229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:51.023241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:51.023247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:51.023251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:51.023262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 00:30:43.797 [2024-07-25 07:36:51.033219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:51.033296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:51.033308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:51.033314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:51.033318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:51.033329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 00:30:43.797 [2024-07-25 07:36:51.043194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.797 [2024-07-25 07:36:51.043277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.797 [2024-07-25 07:36:51.043289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.797 [2024-07-25 07:36:51.043294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.797 [2024-07-25 07:36:51.043299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.797 [2024-07-25 07:36:51.043310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.797 qpair failed and we were unable to recover it. 
00:30:43.798 [2024-07-25 07:36:51.053235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.798 [2024-07-25 07:36:51.053310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.798 [2024-07-25 07:36:51.053322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.798 [2024-07-25 07:36:51.053327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.798 [2024-07-25 07:36:51.053332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.798 [2024-07-25 07:36:51.053343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.798 qpair failed and we were unable to recover it. 00:30:43.798 [2024-07-25 07:36:51.063234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.798 [2024-07-25 07:36:51.063321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.798 [2024-07-25 07:36:51.063334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.798 [2024-07-25 07:36:51.063340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.798 [2024-07-25 07:36:51.063344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.798 [2024-07-25 07:36:51.063355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.798 qpair failed and we were unable to recover it. 00:30:43.798 [2024-07-25 07:36:51.073306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.798 [2024-07-25 07:36:51.073398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.798 [2024-07-25 07:36:51.073414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.798 [2024-07-25 07:36:51.073419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.798 [2024-07-25 07:36:51.073423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.798 [2024-07-25 07:36:51.073435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.798 qpair failed and we were unable to recover it. 
00:30:43.798 [2024-07-25 07:36:51.083294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.798 [2024-07-25 07:36:51.083376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.798 [2024-07-25 07:36:51.083389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.798 [2024-07-25 07:36:51.083394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.798 [2024-07-25 07:36:51.083398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.798 [2024-07-25 07:36:51.083410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.798 qpair failed and we were unable to recover it. 00:30:43.798 [2024-07-25 07:36:51.093338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.798 [2024-07-25 07:36:51.093455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.798 [2024-07-25 07:36:51.093468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.798 [2024-07-25 07:36:51.093474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.798 [2024-07-25 07:36:51.093478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.798 [2024-07-25 07:36:51.093490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.798 qpair failed and we were unable to recover it. 00:30:43.798 [2024-07-25 07:36:51.103272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.798 [2024-07-25 07:36:51.103365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.798 [2024-07-25 07:36:51.103378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.798 [2024-07-25 07:36:51.103383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.798 [2024-07-25 07:36:51.103388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.798 [2024-07-25 07:36:51.103402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.798 qpair failed and we were unable to recover it. 
00:30:43.798 [2024-07-25 07:36:51.113402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.798 [2024-07-25 07:36:51.113489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.798 [2024-07-25 07:36:51.113502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.798 [2024-07-25 07:36:51.113507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.798 [2024-07-25 07:36:51.113511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.798 [2024-07-25 07:36:51.113526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.798 qpair failed and we were unable to recover it. 00:30:43.798 [2024-07-25 07:36:51.123444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.798 [2024-07-25 07:36:51.123528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.798 [2024-07-25 07:36:51.123541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.798 [2024-07-25 07:36:51.123546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.798 [2024-07-25 07:36:51.123550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.798 [2024-07-25 07:36:51.123562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.798 qpair failed and we were unable to recover it. 00:30:43.798 [2024-07-25 07:36:51.133448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.798 [2024-07-25 07:36:51.133527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.798 [2024-07-25 07:36:51.133540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.798 [2024-07-25 07:36:51.133545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.798 [2024-07-25 07:36:51.133550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.798 [2024-07-25 07:36:51.133561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.798 qpair failed and we were unable to recover it. 
00:30:43.798 [2024-07-25 07:36:51.143602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.798 [2024-07-25 07:36:51.143679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.798 [2024-07-25 07:36:51.143691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.798 [2024-07-25 07:36:51.143696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.798 [2024-07-25 07:36:51.143700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.798 [2024-07-25 07:36:51.143712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.798 qpair failed and we were unable to recover it. 00:30:43.798 [2024-07-25 07:36:51.153525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.798 [2024-07-25 07:36:51.153605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.798 [2024-07-25 07:36:51.153617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.798 [2024-07-25 07:36:51.153622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.798 [2024-07-25 07:36:51.153626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:43.798 [2024-07-25 07:36:51.153637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:43.798 qpair failed and we were unable to recover it. 00:30:44.061 [2024-07-25 07:36:51.163418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.061 [2024-07-25 07:36:51.163498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.061 [2024-07-25 07:36:51.163514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.061 [2024-07-25 07:36:51.163519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.061 [2024-07-25 07:36:51.163524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.061 [2024-07-25 07:36:51.163535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.061 qpair failed and we were unable to recover it. 
00:30:44.061 [2024-07-25 07:36:51.173623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.061 [2024-07-25 07:36:51.173732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.061 [2024-07-25 07:36:51.173745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.061 [2024-07-25 07:36:51.173750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.061 [2024-07-25 07:36:51.173754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.061 [2024-07-25 07:36:51.173766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.061 qpair failed and we were unable to recover it. 00:30:44.061 [2024-07-25 07:36:51.183552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.061 [2024-07-25 07:36:51.183630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.061 [2024-07-25 07:36:51.183642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.061 [2024-07-25 07:36:51.183648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.061 [2024-07-25 07:36:51.183652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.061 [2024-07-25 07:36:51.183663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.061 qpair failed and we were unable to recover it. 00:30:44.061 [2024-07-25 07:36:51.193596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.061 [2024-07-25 07:36:51.193677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.061 [2024-07-25 07:36:51.193689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.061 [2024-07-25 07:36:51.193695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.061 [2024-07-25 07:36:51.193699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.061 [2024-07-25 07:36:51.193710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.061 qpair failed and we were unable to recover it. 
00:30:44.061 [2024-07-25 07:36:51.203614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.061 [2024-07-25 07:36:51.203696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.061 [2024-07-25 07:36:51.203708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.061 [2024-07-25 07:36:51.203713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.061 [2024-07-25 07:36:51.203720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.061 [2024-07-25 07:36:51.203732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.061 qpair failed and we were unable to recover it. 00:30:44.061 [2024-07-25 07:36:51.213652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.061 [2024-07-25 07:36:51.213728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.061 [2024-07-25 07:36:51.213741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.061 [2024-07-25 07:36:51.213746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.061 [2024-07-25 07:36:51.213750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.061 [2024-07-25 07:36:51.213761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.061 qpair failed and we were unable to recover it. 00:30:44.061 [2024-07-25 07:36:51.223578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.061 [2024-07-25 07:36:51.223651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.061 [2024-07-25 07:36:51.223664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.061 [2024-07-25 07:36:51.223670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.061 [2024-07-25 07:36:51.223674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.061 [2024-07-25 07:36:51.223686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.061 qpair failed and we were unable to recover it. 
00:30:44.061 [2024-07-25 07:36:51.233739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.061 [2024-07-25 07:36:51.233823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.061 [2024-07-25 07:36:51.233842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.061 [2024-07-25 07:36:51.233849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.061 [2024-07-25 07:36:51.233853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.061 [2024-07-25 07:36:51.233869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.061 qpair failed and we were unable to recover it. 00:30:44.061 [2024-07-25 07:36:51.243766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.061 [2024-07-25 07:36:51.243855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.243868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.243874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.243878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.243890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 00:30:44.062 [2024-07-25 07:36:51.253801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.253888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.253908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.253914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.253919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.253934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 
00:30:44.062 [2024-07-25 07:36:51.263818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.263896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.263909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.263915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.263919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.263931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 00:30:44.062 [2024-07-25 07:36:51.273777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.273860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.273879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.273886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.273890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.273906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 00:30:44.062 [2024-07-25 07:36:51.283891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.283989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.284008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.284014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.284019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.284034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 
00:30:44.062 [2024-07-25 07:36:51.293903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.294019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.294039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.294045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.294053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.294068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 00:30:44.062 [2024-07-25 07:36:51.303927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.304010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.304030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.304037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.304042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.304057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 00:30:44.062 [2024-07-25 07:36:51.313949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.314054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.314068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.314073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.314078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.314090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 
00:30:44.062 [2024-07-25 07:36:51.324000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.324084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.324097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.324102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.324107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.324118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 00:30:44.062 [2024-07-25 07:36:51.334020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.334093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.334106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.334111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.334116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.334127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 00:30:44.062 [2024-07-25 07:36:51.344024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.344101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.344114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.344119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.344123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.344134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 
00:30:44.062 [2024-07-25 07:36:51.354039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.354115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.354127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.354133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.354137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.354148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 00:30:44.062 [2024-07-25 07:36:51.363997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.364079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.364092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.364097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.062 [2024-07-25 07:36:51.364101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.062 [2024-07-25 07:36:51.364112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.062 qpair failed and we were unable to recover it. 00:30:44.062 [2024-07-25 07:36:51.374093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.062 [2024-07-25 07:36:51.374168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.062 [2024-07-25 07:36:51.374181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.062 [2024-07-25 07:36:51.374187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.063 [2024-07-25 07:36:51.374191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.063 [2024-07-25 07:36:51.374206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.063 qpair failed and we were unable to recover it. 
00:30:44.063 [2024-07-25 07:36:51.384153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.063 [2024-07-25 07:36:51.384260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.063 [2024-07-25 07:36:51.384272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.063 [2024-07-25 07:36:51.384281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.063 [2024-07-25 07:36:51.384285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.063 [2024-07-25 07:36:51.384297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.063 qpair failed and we were unable to recover it. 00:30:44.063 [2024-07-25 07:36:51.394244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.063 [2024-07-25 07:36:51.394345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.063 [2024-07-25 07:36:51.394357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.063 [2024-07-25 07:36:51.394363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.063 [2024-07-25 07:36:51.394367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.063 [2024-07-25 07:36:51.394378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.063 qpair failed and we were unable to recover it. 00:30:44.063 [2024-07-25 07:36:51.404399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.063 [2024-07-25 07:36:51.404483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.063 [2024-07-25 07:36:51.404495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.063 [2024-07-25 07:36:51.404500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.063 [2024-07-25 07:36:51.404505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.063 [2024-07-25 07:36:51.404516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.063 qpair failed and we were unable to recover it. 
00:30:44.063 [2024-07-25 07:36:51.414278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.063 [2024-07-25 07:36:51.414402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.063 [2024-07-25 07:36:51.414414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.063 [2024-07-25 07:36:51.414419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.063 [2024-07-25 07:36:51.414424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.063 [2024-07-25 07:36:51.414435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.063 qpair failed and we were unable to recover it. 00:30:44.063 [2024-07-25 07:36:51.424318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.063 [2024-07-25 07:36:51.424394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.063 [2024-07-25 07:36:51.424406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.063 [2024-07-25 07:36:51.424412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.063 [2024-07-25 07:36:51.424416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.063 [2024-07-25 07:36:51.424428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.063 qpair failed and we were unable to recover it. 00:30:44.326 [2024-07-25 07:36:51.434290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.326 [2024-07-25 07:36:51.434367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.326 [2024-07-25 07:36:51.434379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.326 [2024-07-25 07:36:51.434385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.326 [2024-07-25 07:36:51.434389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.326 [2024-07-25 07:36:51.434400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.326 qpair failed and we were unable to recover it. 
00:30:44.326 [2024-07-25 07:36:51.444281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.326 [2024-07-25 07:36:51.444363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.326 [2024-07-25 07:36:51.444376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.326 [2024-07-25 07:36:51.444381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.326 [2024-07-25 07:36:51.444385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.326 [2024-07-25 07:36:51.444396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.326 qpair failed and we were unable to recover it. 00:30:44.326 [2024-07-25 07:36:51.454344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.326 [2024-07-25 07:36:51.454422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.326 [2024-07-25 07:36:51.454435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.326 [2024-07-25 07:36:51.454441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.454445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.454456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 00:30:44.327 [2024-07-25 07:36:51.464342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.464460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.464473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.464478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.464482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.464494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 
00:30:44.327 [2024-07-25 07:36:51.474340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.474430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.474445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.474450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.474455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.474466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 00:30:44.327 [2024-07-25 07:36:51.484304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.484392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.484404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.484410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.484414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.484425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 00:30:44.327 [2024-07-25 07:36:51.494325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.494406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.494419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.494424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.494428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.494439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 
00:30:44.327 [2024-07-25 07:36:51.504413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.504481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.504493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.504499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.504503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.504514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 00:30:44.327 [2024-07-25 07:36:51.514493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.514574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.514586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.514592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.514596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.514610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 00:30:44.327 [2024-07-25 07:36:51.524521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.524600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.524613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.524618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.524623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.524634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 
00:30:44.327 [2024-07-25 07:36:51.534547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.534620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.534632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.534638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.534642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.534653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 00:30:44.327 [2024-07-25 07:36:51.544564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.544650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.544663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.544668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.544672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.544683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 00:30:44.327 [2024-07-25 07:36:51.554631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.554707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.554720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.554725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.554729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.554741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 
00:30:44.327 [2024-07-25 07:36:51.564656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.564745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.564761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.564766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.564771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.564782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 00:30:44.327 [2024-07-25 07:36:51.574641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.574717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.574731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.574736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.327 [2024-07-25 07:36:51.574740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.327 [2024-07-25 07:36:51.574752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.327 qpair failed and we were unable to recover it. 00:30:44.327 [2024-07-25 07:36:51.584568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.327 [2024-07-25 07:36:51.584641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.327 [2024-07-25 07:36:51.584653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.327 [2024-07-25 07:36:51.584659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.328 [2024-07-25 07:36:51.584663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.328 [2024-07-25 07:36:51.584675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.328 qpair failed and we were unable to recover it. 
00:30:44.328 [2024-07-25 07:36:51.594670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.328 [2024-07-25 07:36:51.594746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.328 [2024-07-25 07:36:51.594758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.328 [2024-07-25 07:36:51.594763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.328 [2024-07-25 07:36:51.594767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.328 [2024-07-25 07:36:51.594779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.328 qpair failed and we were unable to recover it. 00:30:44.328 [2024-07-25 07:36:51.604755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.328 [2024-07-25 07:36:51.604841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.328 [2024-07-25 07:36:51.604861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.328 [2024-07-25 07:36:51.604867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.328 [2024-07-25 07:36:51.604875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.328 [2024-07-25 07:36:51.604890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.328 qpair failed and we were unable to recover it. 00:30:44.328 [2024-07-25 07:36:51.614785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.328 [2024-07-25 07:36:51.614871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.328 [2024-07-25 07:36:51.614890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.328 [2024-07-25 07:36:51.614897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.328 [2024-07-25 07:36:51.614902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.328 [2024-07-25 07:36:51.614917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.328 qpair failed and we were unable to recover it. 
00:30:44.328 [2024-07-25 07:36:51.624762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.328 [2024-07-25 07:36:51.624868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.328 [2024-07-25 07:36:51.624882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.328 [2024-07-25 07:36:51.624887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.328 [2024-07-25 07:36:51.624892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.328 [2024-07-25 07:36:51.624904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.328 qpair failed and we were unable to recover it. 00:30:44.328 [2024-07-25 07:36:51.634817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.328 [2024-07-25 07:36:51.634897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.328 [2024-07-25 07:36:51.634910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.328 [2024-07-25 07:36:51.634916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.328 [2024-07-25 07:36:51.634920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.328 [2024-07-25 07:36:51.634931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.328 qpair failed and we were unable to recover it. 00:30:44.328 [2024-07-25 07:36:51.644883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.328 [2024-07-25 07:36:51.644963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.328 [2024-07-25 07:36:51.644977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.328 [2024-07-25 07:36:51.644982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.328 [2024-07-25 07:36:51.644986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.328 [2024-07-25 07:36:51.644997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.328 qpair failed and we were unable to recover it. 
00:30:44.328 [2024-07-25 07:36:51.654911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.328 [2024-07-25 07:36:51.654995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.328 [2024-07-25 07:36:51.655014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.328 [2024-07-25 07:36:51.655021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.328 [2024-07-25 07:36:51.655026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.328 [2024-07-25 07:36:51.655041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.328 qpair failed and we were unable to recover it. 00:30:44.328 [2024-07-25 07:36:51.664849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.328 [2024-07-25 07:36:51.664930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.328 [2024-07-25 07:36:51.664949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.328 [2024-07-25 07:36:51.664956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.328 [2024-07-25 07:36:51.664961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.328 [2024-07-25 07:36:51.664975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.328 qpair failed and we were unable to recover it. 00:30:44.328 [2024-07-25 07:36:51.674823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.328 [2024-07-25 07:36:51.674905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.328 [2024-07-25 07:36:51.674924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.328 [2024-07-25 07:36:51.674931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.328 [2024-07-25 07:36:51.674935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.328 [2024-07-25 07:36:51.674950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.328 qpair failed and we were unable to recover it. 
00:30:44.328 [2024-07-25 07:36:51.684995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.328 [2024-07-25 07:36:51.685079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.328 [2024-07-25 07:36:51.685093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.328 [2024-07-25 07:36:51.685098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.328 [2024-07-25 07:36:51.685102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.328 [2024-07-25 07:36:51.685114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.328 qpair failed and we were unable to recover it. 00:30:44.592 [2024-07-25 07:36:51.695019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.592 [2024-07-25 07:36:51.695094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.592 [2024-07-25 07:36:51.695107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.592 [2024-07-25 07:36:51.695112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.592 [2024-07-25 07:36:51.695121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.592 [2024-07-25 07:36:51.695132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.592 qpair failed and we were unable to recover it. 00:30:44.592 [2024-07-25 07:36:51.704973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.592 [2024-07-25 07:36:51.705044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.592 [2024-07-25 07:36:51.705056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.592 [2024-07-25 07:36:51.705062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.592 [2024-07-25 07:36:51.705066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.592 [2024-07-25 07:36:51.705077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.592 qpair failed and we were unable to recover it. 
00:30:44.592 [2024-07-25 07:36:51.715042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.592 [2024-07-25 07:36:51.715117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.592 [2024-07-25 07:36:51.715130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.592 [2024-07-25 07:36:51.715135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.592 [2024-07-25 07:36:51.715140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.592 [2024-07-25 07:36:51.715151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.592 qpair failed and we were unable to recover it. 00:30:44.592 [2024-07-25 07:36:51.725078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.592 [2024-07-25 07:36:51.725160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.592 [2024-07-25 07:36:51.725173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.592 [2024-07-25 07:36:51.725178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.592 [2024-07-25 07:36:51.725183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.592 [2024-07-25 07:36:51.725194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.592 qpair failed and we were unable to recover it. 00:30:44.592 [2024-07-25 07:36:51.735092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.592 [2024-07-25 07:36:51.735294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.592 [2024-07-25 07:36:51.735306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.592 [2024-07-25 07:36:51.735312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.592 [2024-07-25 07:36:51.735316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.592 [2024-07-25 07:36:51.735327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.592 qpair failed and we were unable to recover it. 
00:30:44.592 [2024-07-25 07:36:51.745069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.592 [2024-07-25 07:36:51.745140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.592 [2024-07-25 07:36:51.745153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.592 [2024-07-25 07:36:51.745159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.592 [2024-07-25 07:36:51.745163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.592 [2024-07-25 07:36:51.745174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.592 qpair failed and we were unable to recover it. 00:30:44.592 [2024-07-25 07:36:51.755148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.592 [2024-07-25 07:36:51.755235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.592 [2024-07-25 07:36:51.755248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.592 [2024-07-25 07:36:51.755253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.592 [2024-07-25 07:36:51.755257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.592 [2024-07-25 07:36:51.755269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.592 qpair failed and we were unable to recover it. 00:30:44.592 [2024-07-25 07:36:51.765185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.592 [2024-07-25 07:36:51.765267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.592 [2024-07-25 07:36:51.765279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.592 [2024-07-25 07:36:51.765285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.592 [2024-07-25 07:36:51.765289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.592 [2024-07-25 07:36:51.765301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.592 qpair failed and we were unable to recover it. 
00:30:44.592 [2024-07-25 07:36:51.775172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.592 [2024-07-25 07:36:51.775251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.592 [2024-07-25 07:36:51.775264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.592 [2024-07-25 07:36:51.775269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.592 [2024-07-25 07:36:51.775274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.592 [2024-07-25 07:36:51.775285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 00:30:44.593 [2024-07-25 07:36:51.785162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.785238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.785250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.785259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.785263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.785275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 00:30:44.593 [2024-07-25 07:36:51.795267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.795348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.795361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.795366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.795370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.795382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 
00:30:44.593 [2024-07-25 07:36:51.805305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.805385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.805398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.805404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.805408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.805420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 00:30:44.593 [2024-07-25 07:36:51.815153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.815227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.815240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.815245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.815249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.815261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 00:30:44.593 [2024-07-25 07:36:51.825186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.825260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.825273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.825278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.825282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.825294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 
00:30:44.593 [2024-07-25 07:36:51.835401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.835477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.835489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.835494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.835499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.835510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 00:30:44.593 [2024-07-25 07:36:51.845395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.845481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.845494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.845499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.845503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.845514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 00:30:44.593 [2024-07-25 07:36:51.855405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.855524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.855537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.855543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.855547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.855559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 
00:30:44.593 [2024-07-25 07:36:51.865385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.865460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.865472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.865478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.865482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.865493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 00:30:44.593 [2024-07-25 07:36:51.875460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.875543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.875558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.875564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.875568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.875580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 00:30:44.593 [2024-07-25 07:36:51.885389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.885468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.885481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.885487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.885492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.885503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 
00:30:44.593 [2024-07-25 07:36:51.895545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.895662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.895675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.895680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.895684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.895696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 00:30:44.593 [2024-07-25 07:36:51.905498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.593 [2024-07-25 07:36:51.905570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.593 [2024-07-25 07:36:51.905583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.593 [2024-07-25 07:36:51.905588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.593 [2024-07-25 07:36:51.905592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.593 [2024-07-25 07:36:51.905603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.593 qpair failed and we were unable to recover it. 00:30:44.593 [2024-07-25 07:36:51.915564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.594 [2024-07-25 07:36:51.915638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.594 [2024-07-25 07:36:51.915650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.594 [2024-07-25 07:36:51.915656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.594 [2024-07-25 07:36:51.915660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.594 [2024-07-25 07:36:51.915675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.594 qpair failed and we were unable to recover it. 
00:30:44.594 [2024-07-25 07:36:51.925586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.594 [2024-07-25 07:36:51.925667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.594 [2024-07-25 07:36:51.925679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.594 [2024-07-25 07:36:51.925684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.594 [2024-07-25 07:36:51.925689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.594 [2024-07-25 07:36:51.925700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.594 qpair failed and we were unable to recover it. 00:30:44.594 [2024-07-25 07:36:51.935610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.594 [2024-07-25 07:36:51.935681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.594 [2024-07-25 07:36:51.935694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.594 [2024-07-25 07:36:51.935699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.594 [2024-07-25 07:36:51.935704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.594 [2024-07-25 07:36:51.935715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.594 qpair failed and we were unable to recover it. 00:30:44.594 [2024-07-25 07:36:51.945633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.594 [2024-07-25 07:36:51.945701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.594 [2024-07-25 07:36:51.945713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.594 [2024-07-25 07:36:51.945719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.594 [2024-07-25 07:36:51.945723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.594 [2024-07-25 07:36:51.945734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.594 qpair failed and we were unable to recover it. 
00:30:44.594 [2024-07-25 07:36:51.955704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.594 [2024-07-25 07:36:51.955781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.594 [2024-07-25 07:36:51.955794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.594 [2024-07-25 07:36:51.955799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.594 [2024-07-25 07:36:51.955803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.594 [2024-07-25 07:36:51.955815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.594 qpair failed and we were unable to recover it. 00:30:44.860 [2024-07-25 07:36:51.965614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:51.965695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:51.965710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:51.965716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:51.965720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.860 [2024-07-25 07:36:51.965731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.860 qpair failed and we were unable to recover it. 00:30:44.860 [2024-07-25 07:36:51.975737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:51.975821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:51.975833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:51.975839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:51.975843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.860 [2024-07-25 07:36:51.975854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.860 qpair failed and we were unable to recover it. 
00:30:44.860 [2024-07-25 07:36:51.985743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:51.985857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:51.985870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:51.985876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:51.985880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.860 [2024-07-25 07:36:51.985892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.860 qpair failed and we were unable to recover it. 00:30:44.860 [2024-07-25 07:36:51.995854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:51.995937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:51.995957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:51.995963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:51.995968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.860 [2024-07-25 07:36:51.995983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.860 qpair failed and we were unable to recover it. 00:30:44.860 [2024-07-25 07:36:52.005861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:52.005955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:52.005974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:52.005981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:52.005985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.860 [2024-07-25 07:36:52.006004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.860 qpair failed and we were unable to recover it. 
00:30:44.860 [2024-07-25 07:36:52.015861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:52.015937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:52.015956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:52.015963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:52.015967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.860 [2024-07-25 07:36:52.015983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.860 qpair failed and we were unable to recover it. 00:30:44.860 [2024-07-25 07:36:52.025876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:52.025955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:52.025974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:52.025981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:52.025986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.860 [2024-07-25 07:36:52.026001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.860 qpair failed and we were unable to recover it. 00:30:44.860 [2024-07-25 07:36:52.035897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:52.035980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:52.035999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:52.036006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:52.036010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.860 [2024-07-25 07:36:52.036025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.860 qpair failed and we were unable to recover it. 
00:30:44.860 [2024-07-25 07:36:52.045918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:52.046010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:52.046024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:52.046029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:52.046033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.860 [2024-07-25 07:36:52.046046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.860 qpair failed and we were unable to recover it. 00:30:44.860 [2024-07-25 07:36:52.055915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:52.055991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:52.056002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:52.056007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:52.056011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.860 [2024-07-25 07:36:52.056021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.860 qpair failed and we were unable to recover it. 00:30:44.860 [2024-07-25 07:36:52.065951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:52.066022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:52.066034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:52.066040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:52.066044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.860 [2024-07-25 07:36:52.066055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.860 qpair failed and we were unable to recover it. 
00:30:44.860 [2024-07-25 07:36:52.076028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.860 [2024-07-25 07:36:52.076104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.860 [2024-07-25 07:36:52.076116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.860 [2024-07-25 07:36:52.076121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.860 [2024-07-25 07:36:52.076126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.076137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 00:30:44.861 [2024-07-25 07:36:52.086088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.086169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.086182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.086187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.086191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.086206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 00:30:44.861 [2024-07-25 07:36:52.096064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.096138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.096153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.096159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.096169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.096182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 
00:30:44.861 [2024-07-25 07:36:52.105957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.106027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.106040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.106045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.106049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.106061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 00:30:44.861 [2024-07-25 07:36:52.116155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.116234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.116247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.116252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.116256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.116268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 00:30:44.861 [2024-07-25 07:36:52.126141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.126221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.126234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.126239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.126244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.126256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 
00:30:44.861 [2024-07-25 07:36:52.136189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.136308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.136321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.136327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.136331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.136342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 00:30:44.861 [2024-07-25 07:36:52.146161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.146265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.146279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.146285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.146289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.146301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 00:30:44.861 [2024-07-25 07:36:52.156239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.156316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.156329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.156334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.156338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.156349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 
00:30:44.861 [2024-07-25 07:36:52.166262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.166340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.166353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.166358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.166363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.166374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 00:30:44.861 [2024-07-25 07:36:52.176248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.176323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.176336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.176341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.176345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.176356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 00:30:44.861 [2024-07-25 07:36:52.186270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.186343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.186355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.186364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.186368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.186380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 
00:30:44.861 [2024-07-25 07:36:52.196354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.196433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.196445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.196451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.196455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.196467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 00:30:44.861 [2024-07-25 07:36:52.206330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.861 [2024-07-25 07:36:52.206405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.861 [2024-07-25 07:36:52.206417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.861 [2024-07-25 07:36:52.206423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.861 [2024-07-25 07:36:52.206427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.861 [2024-07-25 07:36:52.206439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.861 qpair failed and we were unable to recover it. 00:30:44.862 [2024-07-25 07:36:52.216365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.862 [2024-07-25 07:36:52.216439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.862 [2024-07-25 07:36:52.216451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.862 [2024-07-25 07:36:52.216457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.862 [2024-07-25 07:36:52.216461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.862 [2024-07-25 07:36:52.216473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.862 qpair failed and we were unable to recover it. 
00:30:44.862 [2024-07-25 07:36:52.226295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:44.862 [2024-07-25 07:36:52.226368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:44.862 [2024-07-25 07:36:52.226381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:44.862 [2024-07-25 07:36:52.226386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:44.862 [2024-07-25 07:36:52.226391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:44.862 [2024-07-25 07:36:52.226402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:44.862 qpair failed and we were unable to recover it. 00:30:45.124 [2024-07-25 07:36:52.236451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.236559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.236571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.236576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.236581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.236592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 00:30:45.124 [2024-07-25 07:36:52.246452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.246532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.246544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.246549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.246554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.246566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 
00:30:45.124 [2024-07-25 07:36:52.256456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.256528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.256541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.256546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.256550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.256562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 00:30:45.124 [2024-07-25 07:36:52.266473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.266539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.266552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.266557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.266561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.266572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 00:30:45.124 [2024-07-25 07:36:52.276582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.276658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.276671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.276679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.276684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.276696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 
00:30:45.124 [2024-07-25 07:36:52.286496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.286573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.286586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.286591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.286595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.286606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 00:30:45.124 [2024-07-25 07:36:52.296557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.296635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.296648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.296653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.296657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.296668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 00:30:45.124 [2024-07-25 07:36:52.306559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.306668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.306681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.306686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.306690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.306701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 
00:30:45.124 [2024-07-25 07:36:52.316651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.316730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.316742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.316747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.316752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.316763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 00:30:45.124 [2024-07-25 07:36:52.326634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.326715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.326728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.326733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.326737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.326748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 00:30:45.124 [2024-07-25 07:36:52.336552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.336632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.336645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.336650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.336654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.336665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 
00:30:45.124 [2024-07-25 07:36:52.346716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.346804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.346817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.346822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.346826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.346838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 00:30:45.124 [2024-07-25 07:36:52.356646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.124 [2024-07-25 07:36:52.356723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.124 [2024-07-25 07:36:52.356736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.124 [2024-07-25 07:36:52.356741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.124 [2024-07-25 07:36:52.356745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.124 [2024-07-25 07:36:52.356757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.124 qpair failed and we were unable to recover it. 00:30:45.125 [2024-07-25 07:36:52.366750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.366828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.366843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.366848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.366852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.366864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 
00:30:45.125 [2024-07-25 07:36:52.376774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.376850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.376870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.376876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.376881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.376896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 00:30:45.125 [2024-07-25 07:36:52.386785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.386862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.386875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.386880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.386884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.386896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 00:30:45.125 [2024-07-25 07:36:52.396872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.396962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.396981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.396987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.396992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.397008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 
00:30:45.125 [2024-07-25 07:36:52.406845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.406930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.406949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.406955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.406960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.406979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 00:30:45.125 [2024-07-25 07:36:52.416914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.416997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.417016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.417023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.417027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.417042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 00:30:45.125 [2024-07-25 07:36:52.426897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.426983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.427002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.427008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.427013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.427028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 
00:30:45.125 [2024-07-25 07:36:52.436994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.437077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.437092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.437097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.437101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.437114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 00:30:45.125 [2024-07-25 07:36:52.446945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.447026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.447039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.447044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.447048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.447061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 00:30:45.125 [2024-07-25 07:36:52.456971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.457044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.457060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.457066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.457070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.457081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 
00:30:45.125 [2024-07-25 07:36:52.466993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.467063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.467075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.467081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.467085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.467096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 00:30:45.125 [2024-07-25 07:36:52.477115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.477195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.477211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.477216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.477221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.477232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 00:30:45.125 [2024-07-25 07:36:52.487098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.125 [2024-07-25 07:36:52.487205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.125 [2024-07-25 07:36:52.487217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.125 [2024-07-25 07:36:52.487223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.125 [2024-07-25 07:36:52.487227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.125 [2024-07-25 07:36:52.487239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.125 qpair failed and we were unable to recover it. 
00:30:45.388 [2024-07-25 07:36:52.497093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.388 [2024-07-25 07:36:52.497167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.388 [2024-07-25 07:36:52.497179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.388 [2024-07-25 07:36:52.497184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.388 [2024-07-25 07:36:52.497192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.388 [2024-07-25 07:36:52.497207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.388 qpair failed and we were unable to recover it. 00:30:45.388 [2024-07-25 07:36:52.507119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.388 [2024-07-25 07:36:52.507190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.388 [2024-07-25 07:36:52.507207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.388 [2024-07-25 07:36:52.507212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.388 [2024-07-25 07:36:52.507217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.388 [2024-07-25 07:36:52.507228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.388 qpair failed and we were unable to recover it. 00:30:45.388 [2024-07-25 07:36:52.517186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.388 [2024-07-25 07:36:52.517294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.388 [2024-07-25 07:36:52.517306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.388 [2024-07-25 07:36:52.517311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.388 [2024-07-25 07:36:52.517316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.388 [2024-07-25 07:36:52.517327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.388 qpair failed and we were unable to recover it. 
00:30:45.388 [2024-07-25 07:36:52.527076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.388 [2024-07-25 07:36:52.527157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.388 [2024-07-25 07:36:52.527170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.388 [2024-07-25 07:36:52.527175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.388 [2024-07-25 07:36:52.527179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.388 [2024-07-25 07:36:52.527191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.388 qpair failed and we were unable to recover it. 00:30:45.388 [2024-07-25 07:36:52.537209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.388 [2024-07-25 07:36:52.537281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.388 [2024-07-25 07:36:52.537293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.388 [2024-07-25 07:36:52.537299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.388 [2024-07-25 07:36:52.537303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.388 [2024-07-25 07:36:52.537314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.388 qpair failed and we were unable to recover it. 00:30:45.388 [2024-07-25 07:36:52.547116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.388 [2024-07-25 07:36:52.547191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.388 [2024-07-25 07:36:52.547207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.388 [2024-07-25 07:36:52.547213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.388 [2024-07-25 07:36:52.547218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.388 [2024-07-25 07:36:52.547229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.388 qpair failed and we were unable to recover it. 
00:30:45.388 [2024-07-25 07:36:52.557353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.388 [2024-07-25 07:36:52.557432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.388 [2024-07-25 07:36:52.557445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.388 [2024-07-25 07:36:52.557450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.388 [2024-07-25 07:36:52.557454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.388 [2024-07-25 07:36:52.557466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.388 qpair failed and we were unable to recover it. 00:30:45.388 [2024-07-25 07:36:52.567320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.388 [2024-07-25 07:36:52.567410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.388 [2024-07-25 07:36:52.567423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.388 [2024-07-25 07:36:52.567428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.388 [2024-07-25 07:36:52.567432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.388 [2024-07-25 07:36:52.567444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.388 qpair failed and we were unable to recover it. 00:30:45.388 [2024-07-25 07:36:52.577290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.388 [2024-07-25 07:36:52.577369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.388 [2024-07-25 07:36:52.577382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.388 [2024-07-25 07:36:52.577387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.388 [2024-07-25 07:36:52.577392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.388 [2024-07-25 07:36:52.577403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.388 qpair failed and we were unable to recover it. 
00:30:45.388 [2024-07-25 07:36:52.587339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.388 [2024-07-25 07:36:52.587407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.587420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.587428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.587433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.587444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 00:30:45.389 [2024-07-25 07:36:52.597442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.597522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.597534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.597540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.597544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.597555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 00:30:45.389 [2024-07-25 07:36:52.607382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.607462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.607474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.607480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.607484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.607495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 
00:30:45.389 [2024-07-25 07:36:52.617389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.617460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.617472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.617477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.617482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.617493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 00:30:45.389 [2024-07-25 07:36:52.627476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.627552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.627564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.627569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.627573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.627584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 00:30:45.389 [2024-07-25 07:36:52.637523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.637599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.637612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.637617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.637621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.637632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 
00:30:45.389 [2024-07-25 07:36:52.647414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.647488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.647501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.647507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.647511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.647522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 00:30:45.389 [2024-07-25 07:36:52.657530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.657631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.657644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.657649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.657654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.657665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 00:30:45.389 [2024-07-25 07:36:52.667455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.667523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.667536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.667541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.667545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.667556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 
00:30:45.389 [2024-07-25 07:36:52.677640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.677717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.677729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.677737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.677742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.677753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 00:30:45.389 [2024-07-25 07:36:52.687595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.687673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.687685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.687690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.687694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.687706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 00:30:45.389 [2024-07-25 07:36:52.697627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.697696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.697708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.697714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.697718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.697729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 
00:30:45.389 [2024-07-25 07:36:52.707660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.707736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.707748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.707753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.707758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.389 [2024-07-25 07:36:52.707769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.389 qpair failed and we were unable to recover it. 00:30:45.389 [2024-07-25 07:36:52.717738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.389 [2024-07-25 07:36:52.717815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.389 [2024-07-25 07:36:52.717828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.389 [2024-07-25 07:36:52.717833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.389 [2024-07-25 07:36:52.717837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.390 [2024-07-25 07:36:52.717848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.390 qpair failed and we were unable to recover it. 00:30:45.390 [2024-07-25 07:36:52.727739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.390 [2024-07-25 07:36:52.727817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.390 [2024-07-25 07:36:52.727830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.390 [2024-07-25 07:36:52.727835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.390 [2024-07-25 07:36:52.727839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.390 [2024-07-25 07:36:52.727851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.390 qpair failed and we were unable to recover it. 
00:30:45.390 [2024-07-25 07:36:52.737783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.390 [2024-07-25 07:36:52.737870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.390 [2024-07-25 07:36:52.737883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.390 [2024-07-25 07:36:52.737888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.390 [2024-07-25 07:36:52.737892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.390 [2024-07-25 07:36:52.737903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.390 qpair failed and we were unable to recover it. 00:30:45.390 [2024-07-25 07:36:52.747780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.390 [2024-07-25 07:36:52.747852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.390 [2024-07-25 07:36:52.747864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.390 [2024-07-25 07:36:52.747870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.390 [2024-07-25 07:36:52.747874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.390 [2024-07-25 07:36:52.747885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.390 qpair failed and we were unable to recover it. 00:30:45.652 [2024-07-25 07:36:52.757827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.652 [2024-07-25 07:36:52.757903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.652 [2024-07-25 07:36:52.757915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.652 [2024-07-25 07:36:52.757920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.652 [2024-07-25 07:36:52.757925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.652 [2024-07-25 07:36:52.757936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.652 qpair failed and we were unable to recover it. 
00:30:45.652 [2024-07-25 07:36:52.767824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.652 [2024-07-25 07:36:52.767902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.652 [2024-07-25 07:36:52.767921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.652 [2024-07-25 07:36:52.767926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.652 [2024-07-25 07:36:52.767930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.652 [2024-07-25 07:36:52.767942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.652 qpair failed and we were unable to recover it. 00:30:45.652 [2024-07-25 07:36:52.777890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.652 [2024-07-25 07:36:52.777965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.652 [2024-07-25 07:36:52.777980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.652 [2024-07-25 07:36:52.777986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.652 [2024-07-25 07:36:52.777990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.652 [2024-07-25 07:36:52.778002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.652 qpair failed and we were unable to recover it. 00:30:45.652 [2024-07-25 07:36:52.787869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.652 [2024-07-25 07:36:52.787994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.652 [2024-07-25 07:36:52.788006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.652 [2024-07-25 07:36:52.788012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.652 [2024-07-25 07:36:52.788016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.652 [2024-07-25 07:36:52.788027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.652 qpair failed and we were unable to recover it. 
00:30:45.652 [2024-07-25 07:36:52.797846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.652 [2024-07-25 07:36:52.797929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.652 [2024-07-25 07:36:52.797949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.652 [2024-07-25 07:36:52.797955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.652 [2024-07-25 07:36:52.797960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.652 [2024-07-25 07:36:52.797975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.652 qpair failed and we were unable to recover it. 00:30:45.653 [2024-07-25 07:36:52.807991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.808120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.808140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.808146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.808151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.808170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 00:30:45.653 [2024-07-25 07:36:52.818021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.818106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.818120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.818126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.818130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.818142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 
00:30:45.653 [2024-07-25 07:36:52.827977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.828047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.828059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.828065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.828069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.828080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 00:30:45.653 [2024-07-25 07:36:52.838056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.838132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.838145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.838150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.838154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.838165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 00:30:45.653 [2024-07-25 07:36:52.848094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.848171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.848184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.848189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.848194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.848208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 
00:30:45.653 [2024-07-25 07:36:52.858097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.858173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.858189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.858194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.858198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.858214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 00:30:45.653 [2024-07-25 07:36:52.868088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.868160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.868172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.868178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.868182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.868193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 00:30:45.653 [2024-07-25 07:36:52.878188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.878273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.878285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.878291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.878295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.878306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 
00:30:45.653 [2024-07-25 07:36:52.888130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.888212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.888225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.888230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.888234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.888245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 00:30:45.653 [2024-07-25 07:36:52.898175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.898252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.898265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.898270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.898277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.898289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 00:30:45.653 [2024-07-25 07:36:52.908196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.908296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.908309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.908314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.908318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.908330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 
00:30:45.653 [2024-07-25 07:36:52.918165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.918245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.918255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.918260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.918264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.918274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 00:30:45.653 [2024-07-25 07:36:52.928285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.653 [2024-07-25 07:36:52.928364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.653 [2024-07-25 07:36:52.928376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.653 [2024-07-25 07:36:52.928381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.653 [2024-07-25 07:36:52.928385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.653 [2024-07-25 07:36:52.928397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.653 qpair failed and we were unable to recover it. 00:30:45.653 [2024-07-25 07:36:52.938295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.654 [2024-07-25 07:36:52.938367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.654 [2024-07-25 07:36:52.938380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.654 [2024-07-25 07:36:52.938385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.654 [2024-07-25 07:36:52.938389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.654 [2024-07-25 07:36:52.938400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.654 qpair failed and we were unable to recover it. 
00:30:45.654 [2024-07-25 07:36:52.948340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.654 [2024-07-25 07:36:52.948418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.654 [2024-07-25 07:36:52.948431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.654 [2024-07-25 07:36:52.948436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.654 [2024-07-25 07:36:52.948441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.654 [2024-07-25 07:36:52.948452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.654 qpair failed and we were unable to recover it. 00:30:45.654 [2024-07-25 07:36:52.958404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.654 [2024-07-25 07:36:52.958481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.654 [2024-07-25 07:36:52.958493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.654 [2024-07-25 07:36:52.958499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.654 [2024-07-25 07:36:52.958503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.654 [2024-07-25 07:36:52.958514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.654 qpair failed and we were unable to recover it. 00:30:45.654 [2024-07-25 07:36:52.968258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.654 [2024-07-25 07:36:52.968334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.654 [2024-07-25 07:36:52.968346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.654 [2024-07-25 07:36:52.968352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.654 [2024-07-25 07:36:52.968356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.654 [2024-07-25 07:36:52.968368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.654 qpair failed and we were unable to recover it. 
00:30:45.654 [2024-07-25 07:36:52.978386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.654 [2024-07-25 07:36:52.978455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.654 [2024-07-25 07:36:52.978468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.654 [2024-07-25 07:36:52.978473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.654 [2024-07-25 07:36:52.978477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.654 [2024-07-25 07:36:52.978488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.654 qpair failed and we were unable to recover it. 00:30:45.654 [2024-07-25 07:36:52.988418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.654 [2024-07-25 07:36:52.988492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.654 [2024-07-25 07:36:52.988505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.654 [2024-07-25 07:36:52.988510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.654 [2024-07-25 07:36:52.988517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.654 [2024-07-25 07:36:52.988528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.654 qpair failed and we were unable to recover it. 00:30:45.654 [2024-07-25 07:36:52.998481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.654 [2024-07-25 07:36:52.998557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.654 [2024-07-25 07:36:52.998569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.654 [2024-07-25 07:36:52.998575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.654 [2024-07-25 07:36:52.998579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.654 [2024-07-25 07:36:52.998590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.654 qpair failed and we were unable to recover it. 
00:30:45.654 [2024-07-25 07:36:53.008482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.654 [2024-07-25 07:36:53.008557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.654 [2024-07-25 07:36:53.008570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.654 [2024-07-25 07:36:53.008575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.654 [2024-07-25 07:36:53.008581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.654 [2024-07-25 07:36:53.008593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.654 qpair failed and we were unable to recover it. 00:30:45.654 [2024-07-25 07:36:53.018537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.654 [2024-07-25 07:36:53.018747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.654 [2024-07-25 07:36:53.018760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.654 [2024-07-25 07:36:53.018765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.654 [2024-07-25 07:36:53.018769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.654 [2024-07-25 07:36:53.018780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.654 qpair failed and we were unable to recover it. 00:30:45.917 [2024-07-25 07:36:53.028530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.028602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.028615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.028621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.028625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.917 [2024-07-25 07:36:53.028636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.917 qpair failed and we were unable to recover it. 
00:30:45.917 [2024-07-25 07:36:53.038593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.038671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.038684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.038689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.038693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.917 [2024-07-25 07:36:53.038705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.917 qpair failed and we were unable to recover it. 00:30:45.917 [2024-07-25 07:36:53.048628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.048711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.048724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.048730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.048734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.917 [2024-07-25 07:36:53.048746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.917 qpair failed and we were unable to recover it. 00:30:45.917 [2024-07-25 07:36:53.058618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.058712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.058724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.058730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.058735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.917 [2024-07-25 07:36:53.058746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.917 qpair failed and we were unable to recover it. 
00:30:45.917 [2024-07-25 07:36:53.068535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.068610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.068624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.068629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.068633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.917 [2024-07-25 07:36:53.068645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.917 qpair failed and we were unable to recover it. 00:30:45.917 [2024-07-25 07:36:53.078737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.078813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.078826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.078835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.078839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.917 [2024-07-25 07:36:53.078850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.917 qpair failed and we were unable to recover it. 00:30:45.917 [2024-07-25 07:36:53.088647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.088720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.088733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.088738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.088743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.917 [2024-07-25 07:36:53.088755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.917 qpair failed and we were unable to recover it. 
00:30:45.917 [2024-07-25 07:36:53.098756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.098870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.098884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.098889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.098893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.917 [2024-07-25 07:36:53.098904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.917 qpair failed and we were unable to recover it. 00:30:45.917 [2024-07-25 07:36:53.108767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.108841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.108861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.108868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.108872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.917 [2024-07-25 07:36:53.108888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.917 qpair failed and we were unable to recover it. 00:30:45.917 [2024-07-25 07:36:53.118825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.118911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.118926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.118932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.118937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.917 [2024-07-25 07:36:53.118952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.917 qpair failed and we were unable to recover it. 
00:30:45.917 [2024-07-25 07:36:53.128836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.128915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.128929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.128935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.128939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.917 [2024-07-25 07:36:53.128951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.917 qpair failed and we were unable to recover it. 00:30:45.917 [2024-07-25 07:36:53.138845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.917 [2024-07-25 07:36:53.138916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.917 [2024-07-25 07:36:53.138929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.917 [2024-07-25 07:36:53.138935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.917 [2024-07-25 07:36:53.138939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.138951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 00:30:45.918 [2024-07-25 07:36:53.148866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.148945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.148958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.148963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.148968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.148979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 
00:30:45.918 [2024-07-25 07:36:53.158970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.159070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.159083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.159088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.159092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.159104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 00:30:45.918 [2024-07-25 07:36:53.168926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.168998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.169014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.169019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.169023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.169035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 00:30:45.918 [2024-07-25 07:36:53.178963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.179031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.179043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.179049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.179053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.179064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 
00:30:45.918 [2024-07-25 07:36:53.188985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.189053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.189065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.189071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.189075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.189086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 00:30:45.918 [2024-07-25 07:36:53.199059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.199134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.199147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.199152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.199157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.199168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 00:30:45.918 [2024-07-25 07:36:53.209043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.209119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.209132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.209137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.209142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.209156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 
00:30:45.918 [2024-07-25 07:36:53.219092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.219217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.219229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.219235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.219239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.219251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 00:30:45.918 [2024-07-25 07:36:53.229121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.229193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.229210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.229216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.229220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.229231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 00:30:45.918 [2024-07-25 07:36:53.239102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.239183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.239196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.239206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.239211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.239222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 
00:30:45.918 [2024-07-25 07:36:53.249185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.249275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.249288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.249293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.249297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.249309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 00:30:45.918 [2024-07-25 07:36:53.259179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.259257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.259272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.259278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.259282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.259294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 00:30:45.918 [2024-07-25 07:36:53.269216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.918 [2024-07-25 07:36:53.269292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.918 [2024-07-25 07:36:53.269305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.918 [2024-07-25 07:36:53.269311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.918 [2024-07-25 07:36:53.269315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.918 [2024-07-25 07:36:53.269326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.918 qpair failed and we were unable to recover it. 
00:30:45.919 [2024-07-25 07:36:53.279284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:45.919 [2024-07-25 07:36:53.279359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:45.919 [2024-07-25 07:36:53.279372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:45.919 [2024-07-25 07:36:53.279377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:45.919 [2024-07-25 07:36:53.279381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:45.919 [2024-07-25 07:36:53.279393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:45.919 qpair failed and we were unable to recover it. 00:30:46.179 [2024-07-25 07:36:53.289268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.179 [2024-07-25 07:36:53.289348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.179 [2024-07-25 07:36:53.289360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.179 [2024-07-25 07:36:53.289366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.179 [2024-07-25 07:36:53.289370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.179 [2024-07-25 07:36:53.289382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.179 qpair failed and we were unable to recover it. 00:30:46.179 [2024-07-25 07:36:53.299328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.179 [2024-07-25 07:36:53.299403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.179 [2024-07-25 07:36:53.299416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.179 [2024-07-25 07:36:53.299422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.179 [2024-07-25 07:36:53.299429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.179 [2024-07-25 07:36:53.299441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.179 qpair failed and we were unable to recover it. 
00:30:46.179 [2024-07-25 07:36:53.309334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.179 [2024-07-25 07:36:53.309403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.179 [2024-07-25 07:36:53.309416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.179 [2024-07-25 07:36:53.309421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.309425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.309437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.319419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.319498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.319511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.319516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.319520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.319532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.329400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.329483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.329496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.329501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.329505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.329517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 
00:30:46.180 [2024-07-25 07:36:53.339428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.339496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.339509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.339514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.339518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.339530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.349452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.349663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.349676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.349681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.349685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.349696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.359532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.359645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.359658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.359664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.359668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.359679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 
00:30:46.180 [2024-07-25 07:36:53.369506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.369590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.369602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.369607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.369611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.369622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.379411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.379609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.379622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.379627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.379631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.379642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.389558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.389666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.389678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.389684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.389691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.389702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 
00:30:46.180 [2024-07-25 07:36:53.399642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.399725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.399738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.399743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.399747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.399758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.409646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.409769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.409782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.409787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.409791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.409803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.419632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.419707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.419720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.419725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.419729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.419740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 
00:30:46.180 [2024-07-25 07:36:53.429649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.429724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.429736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.429742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.429746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.429757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.439736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.439828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.439841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.439847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.439851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.439862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.449715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.449797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.449810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.449815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.449819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.449830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 
00:30:46.180 [2024-07-25 07:36:53.459796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.459915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.459927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.459933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.459937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.459948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.469751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.469822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.469834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.469839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.469844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.469854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.479875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.479960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.479972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.479981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.479985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.479996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 
00:30:46.180 [2024-07-25 07:36:53.489855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.489951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.489963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.489969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.489973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.489984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.499878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.499955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.499974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.499981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.499985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.500000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 00:30:46.180 [2024-07-25 07:36:53.509860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.509940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.509954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.509959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.180 [2024-07-25 07:36:53.509963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.180 [2024-07-25 07:36:53.509976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.180 qpair failed and we were unable to recover it. 
00:30:46.180 [2024-07-25 07:36:53.520014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.180 [2024-07-25 07:36:53.520098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.180 [2024-07-25 07:36:53.520111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.180 [2024-07-25 07:36:53.520117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.181 [2024-07-25 07:36:53.520121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.181 [2024-07-25 07:36:53.520133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.181 qpair failed and we were unable to recover it. 00:30:46.181 [2024-07-25 07:36:53.529960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.181 [2024-07-25 07:36:53.530043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.181 [2024-07-25 07:36:53.530056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.181 [2024-07-25 07:36:53.530062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.181 [2024-07-25 07:36:53.530066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.181 [2024-07-25 07:36:53.530078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.181 qpair failed and we were unable to recover it. 00:30:46.181 [2024-07-25 07:36:53.539981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.181 [2024-07-25 07:36:53.540085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.181 [2024-07-25 07:36:53.540098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.181 [2024-07-25 07:36:53.540103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.181 [2024-07-25 07:36:53.540107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.181 [2024-07-25 07:36:53.540118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.181 qpair failed and we were unable to recover it. 
00:30:46.443 [2024-07-25 07:36:53.550008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.550082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.550095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.443 [2024-07-25 07:36:53.550100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.443 [2024-07-25 07:36:53.550104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.443 [2024-07-25 07:36:53.550116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.443 qpair failed and we were unable to recover it. 00:30:46.443 [2024-07-25 07:36:53.560053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.560131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.560144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.443 [2024-07-25 07:36:53.560150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.443 [2024-07-25 07:36:53.560154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.443 [2024-07-25 07:36:53.560165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.443 qpair failed and we were unable to recover it. 00:30:46.443 [2024-07-25 07:36:53.570057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.570174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.570189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.443 [2024-07-25 07:36:53.570195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.443 [2024-07-25 07:36:53.570199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.443 [2024-07-25 07:36:53.570214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.443 qpair failed and we were unable to recover it. 
00:30:46.443 [2024-07-25 07:36:53.580062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.580132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.580145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.443 [2024-07-25 07:36:53.580150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.443 [2024-07-25 07:36:53.580154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.443 [2024-07-25 07:36:53.580165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.443 qpair failed and we were unable to recover it. 00:30:46.443 [2024-07-25 07:36:53.590083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.590156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.590168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.443 [2024-07-25 07:36:53.590174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.443 [2024-07-25 07:36:53.590178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.443 [2024-07-25 07:36:53.590189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.443 qpair failed and we were unable to recover it. 00:30:46.443 [2024-07-25 07:36:53.600150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.600233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.600245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.443 [2024-07-25 07:36:53.600251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.443 [2024-07-25 07:36:53.600255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.443 [2024-07-25 07:36:53.600266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.443 qpair failed and we were unable to recover it. 
00:30:46.443 [2024-07-25 07:36:53.610027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.610107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.610120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.443 [2024-07-25 07:36:53.610125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.443 [2024-07-25 07:36:53.610129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.443 [2024-07-25 07:36:53.610143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.443 qpair failed and we were unable to recover it. 00:30:46.443 [2024-07-25 07:36:53.620180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.620258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.620271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.443 [2024-07-25 07:36:53.620276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.443 [2024-07-25 07:36:53.620281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.443 [2024-07-25 07:36:53.620292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.443 qpair failed and we were unable to recover it. 00:30:46.443 [2024-07-25 07:36:53.630175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.630255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.630268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.443 [2024-07-25 07:36:53.630273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.443 [2024-07-25 07:36:53.630277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.443 [2024-07-25 07:36:53.630289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.443 qpair failed and we were unable to recover it. 
00:30:46.443 [2024-07-25 07:36:53.640308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.640384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.640397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.443 [2024-07-25 07:36:53.640402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.443 [2024-07-25 07:36:53.640407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.443 [2024-07-25 07:36:53.640418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.443 qpair failed and we were unable to recover it. 00:30:46.443 [2024-07-25 07:36:53.650291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.650378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.650391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.443 [2024-07-25 07:36:53.650396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.443 [2024-07-25 07:36:53.650401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.443 [2024-07-25 07:36:53.650412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.443 qpair failed and we were unable to recover it. 00:30:46.443 [2024-07-25 07:36:53.660275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.443 [2024-07-25 07:36:53.660350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.443 [2024-07-25 07:36:53.660365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.660371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.660375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.660386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 
00:30:46.444 [2024-07-25 07:36:53.670345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.670431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.670443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.670449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.670453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.670464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 00:30:46.444 [2024-07-25 07:36:53.680379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.680483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.680495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.680501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.680505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.680516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 00:30:46.444 [2024-07-25 07:36:53.690415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.690491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.690504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.690509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.690513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.690524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 
00:30:46.444 [2024-07-25 07:36:53.700458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.700581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.700594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.700599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.700604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.700618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 00:30:46.444 [2024-07-25 07:36:53.710384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.710466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.710478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.710484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.710488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.710499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 00:30:46.444 [2024-07-25 07:36:53.720539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.720615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.720628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.720633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.720638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.720649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 
00:30:46.444 [2024-07-25 07:36:53.730498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.730576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.730591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.730597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.730601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.730613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 00:30:46.444 [2024-07-25 07:36:53.740491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.740569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.740582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.740588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.740592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.740603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 00:30:46.444 [2024-07-25 07:36:53.750530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.750605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.750617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.750623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.750627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.750638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 
00:30:46.444 [2024-07-25 07:36:53.760607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.760687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.760699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.760705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.760709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.760720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 00:30:46.444 [2024-07-25 07:36:53.770636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.770844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.770856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.770861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.770866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.770877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 00:30:46.444 [2024-07-25 07:36:53.780622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.780699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.780714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.780719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.780723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.444 [2024-07-25 07:36:53.780735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.444 qpair failed and we were unable to recover it. 
00:30:46.444 [2024-07-25 07:36:53.790619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.444 [2024-07-25 07:36:53.790689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.444 [2024-07-25 07:36:53.790702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.444 [2024-07-25 07:36:53.790707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.444 [2024-07-25 07:36:53.790715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.445 [2024-07-25 07:36:53.790727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.445 qpair failed and we were unable to recover it. 00:30:46.445 [2024-07-25 07:36:53.800687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.445 [2024-07-25 07:36:53.800764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.445 [2024-07-25 07:36:53.800777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.445 [2024-07-25 07:36:53.800782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.445 [2024-07-25 07:36:53.800786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.445 [2024-07-25 07:36:53.800797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.445 qpair failed and we were unable to recover it. 00:30:46.707 [2024-07-25 07:36:53.810620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.707 [2024-07-25 07:36:53.810699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.707 [2024-07-25 07:36:53.810711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.707 [2024-07-25 07:36:53.810717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.707 [2024-07-25 07:36:53.810721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.707 [2024-07-25 07:36:53.810732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.707 qpair failed and we were unable to recover it. 
00:30:46.707 [2024-07-25 07:36:53.820720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.707 [2024-07-25 07:36:53.820790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.707 [2024-07-25 07:36:53.820803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.707 [2024-07-25 07:36:53.820808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.707 [2024-07-25 07:36:53.820812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.707 [2024-07-25 07:36:53.820824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.707 qpair failed and we were unable to recover it. 00:30:46.707 [2024-07-25 07:36:53.830756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.707 [2024-07-25 07:36:53.830829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.707 [2024-07-25 07:36:53.830848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.707 [2024-07-25 07:36:53.830855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.707 [2024-07-25 07:36:53.830860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.707 [2024-07-25 07:36:53.830875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.707 qpair failed and we were unable to recover it. 00:30:46.707 [2024-07-25 07:36:53.840828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.707 [2024-07-25 07:36:53.840916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.707 [2024-07-25 07:36:53.840935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.707 [2024-07-25 07:36:53.840942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.707 [2024-07-25 07:36:53.840946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.707 [2024-07-25 07:36:53.840962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.707 qpair failed and we were unable to recover it. 
00:30:46.707 [2024-07-25 07:36:53.850832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.850912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.850926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.850931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.850935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.850947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 00:30:46.708 [2024-07-25 07:36:53.860812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.860889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.860908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.860915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.860919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.860934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 00:30:46.708 [2024-07-25 07:36:53.870892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.870967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.870987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.870993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.870998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.871013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 
00:30:46.708 [2024-07-25 07:36:53.880947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.881029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.881048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.881058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.881063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.881078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 00:30:46.708 [2024-07-25 07:36:53.890916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.890994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.891013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.891020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.891025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.891040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 00:30:46.708 [2024-07-25 07:36:53.900961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.901036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.901055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.901061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.901066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.901081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 
00:30:46.708 [2024-07-25 07:36:53.910932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.911020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.911034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.911039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.911043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.911055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 00:30:46.708 [2024-07-25 07:36:53.921032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.921106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.921118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.921124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.921128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.921139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 00:30:46.708 [2024-07-25 07:36:53.931018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.931092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.931104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.931110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.931114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.931126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 
00:30:46.708 [2024-07-25 07:36:53.941084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.941205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.941219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.941225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.941229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.941241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 00:30:46.708 [2024-07-25 07:36:53.951076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.951193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.951211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.951216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.951221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.951232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 00:30:46.708 [2024-07-25 07:36:53.961123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.961202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.961215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.961221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.961225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.961236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 
00:30:46.708 [2024-07-25 07:36:53.971107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.971182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.708 [2024-07-25 07:36:53.971195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.708 [2024-07-25 07:36:53.971208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.708 [2024-07-25 07:36:53.971212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.708 [2024-07-25 07:36:53.971224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.708 qpair failed and we were unable to recover it. 00:30:46.708 [2024-07-25 07:36:53.981290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.708 [2024-07-25 07:36:53.981363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.709 [2024-07-25 07:36:53.981376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.709 [2024-07-25 07:36:53.981381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.709 [2024-07-25 07:36:53.981385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.709 [2024-07-25 07:36:53.981397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.709 qpair failed and we were unable to recover it. 00:30:46.709 [2024-07-25 07:36:53.991217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.709 [2024-07-25 07:36:53.991291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.709 [2024-07-25 07:36:53.991304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.709 [2024-07-25 07:36:53.991309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.709 [2024-07-25 07:36:53.991313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.709 [2024-07-25 07:36:53.991324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.709 qpair failed and we were unable to recover it. 
00:30:46.709 [2024-07-25 07:36:54.001275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.709 [2024-07-25 07:36:54.001355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.709 [2024-07-25 07:36:54.001368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.709 [2024-07-25 07:36:54.001373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.709 [2024-07-25 07:36:54.001378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.709 [2024-07-25 07:36:54.001389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.709 qpair failed and we were unable to recover it. 00:30:46.709 [2024-07-25 07:36:54.011234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.709 [2024-07-25 07:36:54.011313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.709 [2024-07-25 07:36:54.011326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.709 [2024-07-25 07:36:54.011331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.709 [2024-07-25 07:36:54.011335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.709 [2024-07-25 07:36:54.011346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.709 qpair failed and we were unable to recover it. 00:30:46.709 [2024-07-25 07:36:54.021255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.709 [2024-07-25 07:36:54.021323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.709 [2024-07-25 07:36:54.021336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.709 [2024-07-25 07:36:54.021341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.709 [2024-07-25 07:36:54.021346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.709 [2024-07-25 07:36:54.021357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.709 qpair failed and we were unable to recover it. 
00:30:46.709 [2024-07-25 07:36:54.031290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.709 [2024-07-25 07:36:54.031366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.709 [2024-07-25 07:36:54.031378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.709 [2024-07-25 07:36:54.031384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.709 [2024-07-25 07:36:54.031388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.709 [2024-07-25 07:36:54.031400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.709 qpair failed and we were unable to recover it. 00:30:46.709 [2024-07-25 07:36:54.041336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.709 [2024-07-25 07:36:54.041411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.709 [2024-07-25 07:36:54.041423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.709 [2024-07-25 07:36:54.041429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.709 [2024-07-25 07:36:54.041433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.709 [2024-07-25 07:36:54.041445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.709 qpair failed and we were unable to recover it. 00:30:46.709 [2024-07-25 07:36:54.051330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.709 [2024-07-25 07:36:54.051404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.709 [2024-07-25 07:36:54.051417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.709 [2024-07-25 07:36:54.051422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.709 [2024-07-25 07:36:54.051427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.709 [2024-07-25 07:36:54.051438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.709 qpair failed and we were unable to recover it. 
00:30:46.709 [2024-07-25 07:36:54.061338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.709 [2024-07-25 07:36:54.061411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.709 [2024-07-25 07:36:54.061426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.709 [2024-07-25 07:36:54.061432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.709 [2024-07-25 07:36:54.061436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.709 [2024-07-25 07:36:54.061448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.709 qpair failed and we were unable to recover it. 00:30:46.709 [2024-07-25 07:36:54.071297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.709 [2024-07-25 07:36:54.071401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.709 [2024-07-25 07:36:54.071415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.709 [2024-07-25 07:36:54.071420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.709 [2024-07-25 07:36:54.071424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.709 [2024-07-25 07:36:54.071435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.709 qpair failed and we were unable to recover it. 00:30:46.971 [2024-07-25 07:36:54.081461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.081541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.081553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.081559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.081563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.081575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 
00:30:46.971 [2024-07-25 07:36:54.091454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.091530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.091543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.091548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.091552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.091564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 00:30:46.971 [2024-07-25 07:36:54.101466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.101536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.101548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.101554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.101558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.101572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 00:30:46.971 [2024-07-25 07:36:54.111513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.111580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.111593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.111598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.111602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.111614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 
00:30:46.971 [2024-07-25 07:36:54.121595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.121722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.121735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.121740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.121744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.121755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 00:30:46.971 [2024-07-25 07:36:54.131571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.131649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.131662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.131668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.131672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.131684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 00:30:46.971 [2024-07-25 07:36:54.141577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.141648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.141660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.141665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.141669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.141680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 
00:30:46.971 [2024-07-25 07:36:54.151587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.151656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.151671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.151677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.151681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.151692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 00:30:46.971 [2024-07-25 07:36:54.161648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.161725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.161737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.161743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.161747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.161758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 00:30:46.971 [2024-07-25 07:36:54.171629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.171702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.171714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.171720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.171724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.171735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 
00:30:46.971 [2024-07-25 07:36:54.181665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.181733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.181745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.181750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.181754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.181765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 00:30:46.971 [2024-07-25 07:36:54.191583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.191671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.191684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.191689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.191696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.191708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 00:30:46.971 [2024-07-25 07:36:54.201755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.201830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.201842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.201847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.201852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.201863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 
00:30:46.971 [2024-07-25 07:36:54.211671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.211762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.211775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.211781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.211785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.211796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 00:30:46.971 [2024-07-25 07:36:54.221800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.221881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.221900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.221907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.221911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.221927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 00:30:46.971 [2024-07-25 07:36:54.231797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.231871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.231891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.971 [2024-07-25 07:36:54.231897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.971 [2024-07-25 07:36:54.231902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.971 [2024-07-25 07:36:54.231918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.971 qpair failed and we were unable to recover it. 
00:30:46.971 [2024-07-25 07:36:54.241739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.971 [2024-07-25 07:36:54.241816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.971 [2024-07-25 07:36:54.241835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.972 [2024-07-25 07:36:54.241842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.972 [2024-07-25 07:36:54.241847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.972 [2024-07-25 07:36:54.241862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.972 qpair failed and we were unable to recover it. 00:30:46.972 [2024-07-25 07:36:54.251882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.972 [2024-07-25 07:36:54.251963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.972 [2024-07-25 07:36:54.251977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.972 [2024-07-25 07:36:54.251983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.972 [2024-07-25 07:36:54.251987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.972 [2024-07-25 07:36:54.251999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.972 qpair failed and we were unable to recover it. 00:30:46.972 [2024-07-25 07:36:54.261911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.972 [2024-07-25 07:36:54.261986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.972 [2024-07-25 07:36:54.262005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.972 [2024-07-25 07:36:54.262012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.972 [2024-07-25 07:36:54.262017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.972 [2024-07-25 07:36:54.262032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.972 qpair failed and we were unable to recover it. 
00:30:46.972 [2024-07-25 07:36:54.271881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.972 [2024-07-25 07:36:54.271951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.972 [2024-07-25 07:36:54.271965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.972 [2024-07-25 07:36:54.271971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.972 [2024-07-25 07:36:54.271975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.972 [2024-07-25 07:36:54.271987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.972 qpair failed and we were unable to recover it. 00:30:46.972 [2024-07-25 07:36:54.281928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.972 [2024-07-25 07:36:54.281997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.972 [2024-07-25 07:36:54.282010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.972 [2024-07-25 07:36:54.282020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.972 [2024-07-25 07:36:54.282024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.972 [2024-07-25 07:36:54.282036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.972 qpair failed and we were unable to recover it. 00:30:46.972 [2024-07-25 07:36:54.291954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.972 [2024-07-25 07:36:54.292027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.972 [2024-07-25 07:36:54.292039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.972 [2024-07-25 07:36:54.292045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.972 [2024-07-25 07:36:54.292049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.972 [2024-07-25 07:36:54.292061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.972 qpair failed and we were unable to recover it. 
00:30:46.972 [2024-07-25 07:36:54.302047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.972 [2024-07-25 07:36:54.302158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.972 [2024-07-25 07:36:54.302171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.972 [2024-07-25 07:36:54.302176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.972 [2024-07-25 07:36:54.302180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.972 [2024-07-25 07:36:54.302191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.972 qpair failed and we were unable to recover it. 00:30:46.972 [2024-07-25 07:36:54.312010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.972 [2024-07-25 07:36:54.312080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.972 [2024-07-25 07:36:54.312092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.972 [2024-07-25 07:36:54.312098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.972 [2024-07-25 07:36:54.312102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.972 [2024-07-25 07:36:54.312113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.972 qpair failed and we were unable to recover it. 00:30:46.972 [2024-07-25 07:36:54.322059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.972 [2024-07-25 07:36:54.322128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.972 [2024-07-25 07:36:54.322140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.972 [2024-07-25 07:36:54.322146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.972 [2024-07-25 07:36:54.322150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.972 [2024-07-25 07:36:54.322161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.972 qpair failed and we were unable to recover it. 
00:30:46.972 [2024-07-25 07:36:54.332097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:46.972 [2024-07-25 07:36:54.332176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:46.972 [2024-07-25 07:36:54.332189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:46.972 [2024-07-25 07:36:54.332195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.972 [2024-07-25 07:36:54.332199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:46.972 [2024-07-25 07:36:54.332214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:46.972 qpair failed and we were unable to recover it. 00:30:47.233 [2024-07-25 07:36:54.342087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.233 [2024-07-25 07:36:54.342157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.233 [2024-07-25 07:36:54.342170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.233 [2024-07-25 07:36:54.342175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.233 [2024-07-25 07:36:54.342180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:47.233 [2024-07-25 07:36:54.342191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:47.233 qpair failed and we were unable to recover it. 00:30:47.233 [2024-07-25 07:36:54.352192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.233 [2024-07-25 07:36:54.352312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.233 [2024-07-25 07:36:54.352324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.233 [2024-07-25 07:36:54.352330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.233 [2024-07-25 07:36:54.352335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:47.233 [2024-07-25 07:36:54.352346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:47.233 qpair failed and we were unable to recover it. 
00:30:47.233 [2024-07-25 07:36:54.362145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.233 [2024-07-25 07:36:54.362214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.233 [2024-07-25 07:36:54.362226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.233 [2024-07-25 07:36:54.362231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.233 [2024-07-25 07:36:54.362236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:47.233 [2024-07-25 07:36:54.362247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:47.233 qpair failed and we were unable to recover it. 00:30:47.233 [2024-07-25 07:36:54.372128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.233 [2024-07-25 07:36:54.372204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.233 [2024-07-25 07:36:54.372217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.233 [2024-07-25 07:36:54.372225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.233 [2024-07-25 07:36:54.372230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:47.233 [2024-07-25 07:36:54.372241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:47.233 qpair failed and we were unable to recover it. 00:30:47.233 [2024-07-25 07:36:54.382165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.233 [2024-07-25 07:36:54.382248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.233 [2024-07-25 07:36:54.382261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.233 [2024-07-25 07:36:54.382266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.233 [2024-07-25 07:36:54.382270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:47.233 [2024-07-25 07:36:54.382282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:47.233 qpair failed and we were unable to recover it. 
00:30:47.233 [2024-07-25 07:36:54.392219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.233 [2024-07-25 07:36:54.392289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.233 [2024-07-25 07:36:54.392302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.233 [2024-07-25 07:36:54.392307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.233 [2024-07-25 07:36:54.392311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:47.233 [2024-07-25 07:36:54.392323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:47.233 qpair failed and we were unable to recover it. 00:30:47.233 [2024-07-25 07:36:54.402260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.233 [2024-07-25 07:36:54.402357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.233 [2024-07-25 07:36:54.402370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.233 [2024-07-25 07:36:54.402375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.233 [2024-07-25 07:36:54.402379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:47.233 [2024-07-25 07:36:54.402390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:47.233 qpair failed and we were unable to recover it. 00:30:47.233 [2024-07-25 07:36:54.412281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.233 [2024-07-25 07:36:54.412358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.233 [2024-07-25 07:36:54.412371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.233 [2024-07-25 07:36:54.412376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.233 [2024-07-25 07:36:54.412380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:47.233 [2024-07-25 07:36:54.412391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:47.233 qpair failed and we were unable to recover it. 
00:30:47.233 [2024-07-25 07:36:54.422372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.233 [2024-07-25 07:36:54.422444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.233 [2024-07-25 07:36:54.422456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.233 [2024-07-25 07:36:54.422462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.233 [2024-07-25 07:36:54.422466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d98000b90 00:30:47.233 [2024-07-25 07:36:54.422477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:47.233 qpair failed and we were unable to recover it. 00:30:47.233 [2024-07-25 07:36:54.432497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.233 [2024-07-25 07:36:54.432735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.233 [2024-07-25 07:36:54.432804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.234 [2024-07-25 07:36:54.432829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.234 [2024-07-25 07:36:54.432849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d8c000b90 00:30:47.234 [2024-07-25 07:36:54.432905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:47.234 qpair failed and we were unable to recover it. 00:30:47.234 [2024-07-25 07:36:54.442468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:47.234 [2024-07-25 07:36:54.442644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:47.234 [2024-07-25 07:36:54.442678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:47.234 [2024-07-25 07:36:54.442694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:47.234 [2024-07-25 07:36:54.442708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9d8c000b90 00:30:47.234 [2024-07-25 07:36:54.442742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:47.234 qpair failed and we were unable to recover it. 00:30:47.234 [2024-07-25 07:36:54.442861] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:47.234 A controller has encountered a failure and is being reset. 00:30:47.234 [2024-07-25 07:36:54.442900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189f20 (9): Bad file descriptor 00:30:47.234 Controller properly reset. 
00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Write completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Write completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Write completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Write completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Write completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Write completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Write completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Write completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 Read completed with error (sct=0, sc=8) 00:30:47.234 starting I/O failed 00:30:47.234 [2024-07-25 07:36:54.498659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:47.234 Initializing NVMe Controllers 00:30:47.234 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:47.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:47.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:47.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:47.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:47.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:47.234 
Initialization complete. Launching workers. 00:30:47.234 Starting thread on core 1 00:30:47.234 Starting thread on core 2 00:30:47.234 Starting thread on core 3 00:30:47.234 Starting thread on core 0 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:47.234 00:30:47.234 real 0m11.367s 00:30:47.234 user 0m20.654s 00:30:47.234 sys 0m4.159s 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:47.234 ************************************ 00:30:47.234 END TEST nvmf_target_disconnect_tc2 00:30:47.234 ************************************ 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:47.234 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:47.234 rmmod nvme_tcp 00:30:47.234 rmmod nvme_fabrics 00:30:47.494 rmmod nvme_keyring 00:30:47.494 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:47.494 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 287033 ']' 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 287033 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 287033 ']' 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 287033 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 287033 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 287033' 00:30:47.495 killing process with pid 287033 00:30:47.495 07:36:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 287033 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 287033 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.495 07:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.042 07:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:50.042 00:30:50.042 real 0m21.388s 00:30:50.042 user 0m48.229s 00:30:50.042 sys 0m10.058s 00:30:50.042 07:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:50.042 07:36:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:50.042 ************************************ 00:30:50.042 END TEST nvmf_target_disconnect 00:30:50.042 ************************************ 00:30:50.042 07:36:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:50.042 00:30:50.042 real 6m18.632s 00:30:50.042 user 11m7.988s 00:30:50.042 sys 2m6.184s 00:30:50.042 07:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:50.042 07:36:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:50.042 ************************************ 00:30:50.042 END TEST nvmf_host 00:30:50.042 ************************************ 00:30:50.042 00:30:50.042 real 22m42.608s 00:30:50.042 user 47m15.864s 00:30:50.042 sys 7m16.799s 00:30:50.042 07:36:56 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:50.042 07:36:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:50.042 ************************************ 00:30:50.042 END TEST nvmf_tcp 00:30:50.042 ************************************ 00:30:50.042 07:36:57 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:30:50.042 07:36:57 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:50.042 07:36:57 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:50.042 07:36:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:50.042 07:36:57 -- common/autotest_common.sh@10 -- # set +x 00:30:50.042 ************************************ 00:30:50.042 START TEST spdkcli_nvmf_tcp 00:30:50.042 ************************************ 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:50.043 * Looking for test storage... 
00:30:50.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=288930 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 288930 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 288930 ']' 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:50.043 07:36:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:50.043 [2024-07-25 07:36:57.255276] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:30:50.043 [2024-07-25 07:36:57.255347] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288930 ] 00:30:50.043 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.043 [2024-07-25 07:36:57.322004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:50.043 [2024-07-25 07:36:57.399158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.043 [2024-07-25 07:36:57.399159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.986 07:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:50.986 07:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:30:50.986 07:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:50.986 07:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:50.986 07:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:50.986 07:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:50.986 07:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:50.986 07:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:50.986 07:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:50.986 07:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:50.986 07:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:50.986 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:50.986 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:50.986 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:50.986 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:50.986 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:50.986 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:50.986 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:50.986 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:50.986 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:50.986 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:50.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:50.986 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:50.986 ' 00:30:53.533 [2024-07-25 07:37:00.392150] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.475 [2024-07-25 07:37:01.556002] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:56.389 [2024-07-25 07:37:03.698294] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:58.303 [2024-07-25 07:37:05.535769] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:59.689 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:59.689 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:59.689 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:59.689 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:59.689 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:59.689 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:59.689 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:59.689 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:59.689 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:59.689 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:59.689 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:59.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:59.689 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:59.951 07:37:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:59.951 07:37:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:59.951 07:37:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:59.951 07:37:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:59.951 07:37:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:59.951 07:37:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:59.951 07:37:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:59.951 07:37:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:00.213 07:37:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:00.213 07:37:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:00.213 07:37:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:00.213 07:37:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:00.213 07:37:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:00.213 07:37:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:00.213 07:37:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:00.213 07:37:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:00.213 07:37:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:00.213 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:00.213 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:00.213 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:00.213 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:00.213 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:00.213 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:00.213 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:00.213 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:00.213 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:00.213 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:00.213 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:00.213 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:00.213 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:00.213 ' 00:31:05.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:05.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:05.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:05.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:05.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:05.506 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:05.506 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:05.506 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:05.506 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:05.506 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:05.506 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:31:05.506 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:05.506 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:05.506 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 288930 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 288930 ']' 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 288930 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 288930 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 288930' 00:31:05.506 killing process with pid 288930 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 288930 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 288930 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 288930 ']' 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 288930 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 288930 ']' 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 288930 00:31:05.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (288930) - No such process 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 288930 is not found' 00:31:05.506 Process with pid 288930 is not found 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:05.506 00:31:05.506 real 0m15.558s 00:31:05.506 user 0m32.019s 00:31:05.506 sys 0m0.710s 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:05.506 07:37:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.506 ************************************ 00:31:05.506 END TEST spdkcli_nvmf_tcp 00:31:05.506 ************************************ 00:31:05.506 07:37:12 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:05.506 07:37:12 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:05.506 07:37:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:05.506 07:37:12 -- common/autotest_common.sh@10 -- # set +x 00:31:05.506 ************************************ 00:31:05.506 START TEST nvmf_identify_passthru 00:31:05.506 ************************************ 00:31:05.506 07:37:12 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:05.506 * Looking for test storage... 00:31:05.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:05.506 07:37:12 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.506 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:05.506 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.506 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.506 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.506 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.506 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.506 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.506 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.507 07:37:12 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.507 07:37:12 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.507 07:37:12 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.507 07:37:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.507 07:37:12 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.507 07:37:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.507 07:37:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:05.507 07:37:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:05.507 07:37:12 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.507 07:37:12 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.507 07:37:12 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.507 07:37:12 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.507 07:37:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.507 07:37:12 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.507 07:37:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.507 07:37:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:05.507 07:37:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.507 07:37:12 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.507 07:37:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:05.507 07:37:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:05.507 07:37:12 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:31:05.507 07:37:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
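Note: the array declarations here open gather_supported_nvmf_pci_devs; the scan traced below walks the PCI bus, matches vendor/device IDs against the known E810/X722/Mellanox parts, and records the net devices behind each matching function. A minimal standalone sketch of the same matching for the two E810 IDs seen in this run (assuming only the standard sysfs layout, not the script's pci_bus_cache helper) would look roughly like:

  # Hedged sketch: approximate the E810 portion of the scan below using plain sysfs.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")
      device=$(cat "$pci/device")
      if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
          echo "Found ${pci##*/} ($vendor - $device)"
          ls "$pci/net" 2>/dev/null    # net interface(s) bound to this function, e.g. cvl_0_0
      fi
  done

The 'Found 0000:4b:00.0 (0x8086 - 0x159b)' and 'Found net devices under 0000:4b:00.0: cvl_0_0' lines that follow are the real script's output for that same walk.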
00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:13.654 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:13.654 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:13.654 07:37:19 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:13.654 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:13.654 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:13.654 07:37:19 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.654 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:13.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:31:13.655 00:31:13.655 --- 10.0.0.2 ping statistics --- 00:31:13.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.655 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:13.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:31:13.655 00:31:13.655 --- 10.0.0.1 ping statistics --- 00:31:13.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.655 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:13.655 07:37:19 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:13.655 07:37:19 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:13.655 07:37:19 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:13.655 07:37:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:13.655 07:37:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:13.655 07:37:19 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:31:13.655 07:37:19 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:31:13.655 07:37:19 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:31:13.655 07:37:19 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:31:13.655 07:37:19 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:13.655 07:37:19 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:31:13.655 07:37:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:13.655 07:37:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:13.655 07:37:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:13.655 07:37:20 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:13.655 07:37:20 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:31:13.655 07:37:20 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:31:13.655 07:37:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:31:13.655 07:37:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:31:13.655 07:37:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:13.655 07:37:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:13.655 07:37:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:13.655 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.655 
07:37:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:31:13.655 07:37:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:13.655 07:37:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:13.655 07:37:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:13.655 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.655 07:37:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:31:13.655 07:37:21 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:13.655 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:13.655 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:13.916 07:37:21 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:13.916 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:13.916 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:13.916 07:37:21 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:13.916 07:37:21 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=295743 00:31:13.916 07:37:21 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:13.916 07:37:21 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 295743 00:31:13.916 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 295743 ']' 00:31:13.916 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.916 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:13.916 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.916 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:13.916 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:13.916 [2024-07-25 07:37:21.070689] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:31:13.916 [2024-07-25 07:37:21.070727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.916 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.917 [2024-07-25 07:37:21.126157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:13.917 [2024-07-25 07:37:21.191230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.917 [2024-07-25 07:37:21.191267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
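Note: because nvmf_tgt was launched above with --wait-for-rpc, everything after this point is driven over JSON-RPC. The requests traced below enable passthru of Identify admin commands, finish subsystem init, create the TCP transport, attach the local 0000:65:00.0 device as bdev Nvme0, and export it as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. Issued through scripts/rpc.py against the default /var/tmp/spdk.sock, the same sequence looks roughly like this sketch mirroring the rpc_cmd calls in the trace (not a verbatim transcript):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_set_config --passthru-identify-ctrlr     # forward Identify to the backing controller
  $rpc framework_start_init                          # leave the --wait-for-rpc holding state
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test itself issues these via the rpc_cmd wrapper, which is why they show up below as INFO Requests/response JSON rather than command lines.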
00:31:13.917 [2024-07-25 07:37:21.191275] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.917 [2024-07-25 07:37:21.191281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.917 [2024-07-25 07:37:21.191286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:13.917 [2024-07-25 07:37:21.191474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.917 [2024-07-25 07:37:21.191595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.917 [2024-07-25 07:37:21.191755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.917 [2024-07-25 07:37:21.191757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:31:14.859 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:14.860 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:31:14.860 07:37:21 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:14.860 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.860 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:14.860 INFO: Log level set to 20 00:31:14.860 INFO: Requests: 00:31:14.860 { 00:31:14.860 "jsonrpc": "2.0", 00:31:14.860 "method": "nvmf_set_config", 00:31:14.860 "id": 1, 00:31:14.860 "params": { 00:31:14.860 "admin_cmd_passthru": { 00:31:14.860 "identify_ctrlr": true 00:31:14.860 } 00:31:14.860 } 00:31:14.860 } 00:31:14.860 00:31:14.860 INFO: response: 00:31:14.860 { 00:31:14.860 "jsonrpc": "2.0", 00:31:14.860 "id": 1, 00:31:14.860 "result": true 00:31:14.860 } 00:31:14.860 00:31:14.860 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.860 07:37:21 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:14.860 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.860 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:14.860 INFO: Setting log level to 20 00:31:14.860 INFO: Setting log level to 20 00:31:14.860 INFO: Log level set to 20 00:31:14.860 INFO: Log level set to 20 00:31:14.860 INFO: Requests: 00:31:14.860 { 00:31:14.860 "jsonrpc": "2.0", 00:31:14.860 "method": "framework_start_init", 00:31:14.860 "id": 1 00:31:14.860 } 00:31:14.860 00:31:14.860 INFO: Requests: 00:31:14.860 { 00:31:14.860 "jsonrpc": "2.0", 00:31:14.860 "method": "framework_start_init", 00:31:14.860 "id": 1 00:31:14.860 } 00:31:14.860 00:31:14.860 [2024-07-25 07:37:21.952944] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:14.860 INFO: response: 00:31:14.860 { 00:31:14.860 "jsonrpc": "2.0", 00:31:14.860 "id": 1, 00:31:14.860 "result": true 00:31:14.860 } 00:31:14.860 00:31:14.860 INFO: response: 00:31:14.860 { 00:31:14.860 "jsonrpc": "2.0", 00:31:14.860 "id": 1, 00:31:14.860 "result": true 00:31:14.860 } 00:31:14.860 00:31:14.860 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.860 07:37:21 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:14.860 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.860 07:37:21 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:14.860 INFO: Setting log level to 40 00:31:14.860 INFO: Setting log level to 40 00:31:14.860 INFO: Setting log level to 40 00:31:14.860 [2024-07-25 07:37:21.966265] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.860 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.860 07:37:21 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:14.860 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:14.860 07:37:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:14.860 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:31:14.860 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.860 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:15.121 Nvme0n1 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.121 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.121 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.121 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:15.121 [2024-07-25 07:37:22.350486] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.121 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:15.121 [ 00:31:15.121 { 00:31:15.121 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:15.121 "subtype": "Discovery", 00:31:15.121 "listen_addresses": [], 00:31:15.121 "allow_any_host": true, 00:31:15.121 "hosts": [] 00:31:15.121 }, 00:31:15.121 { 00:31:15.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:15.121 "subtype": "NVMe", 00:31:15.121 "listen_addresses": [ 00:31:15.121 { 00:31:15.121 "trtype": "TCP", 00:31:15.121 "adrfam": "IPv4", 00:31:15.121 "traddr": "10.0.0.2", 00:31:15.121 "trsvcid": "4420" 00:31:15.121 } 00:31:15.121 ], 00:31:15.121 "allow_any_host": true, 00:31:15.121 "hosts": [], 00:31:15.121 "serial_number": 
"SPDK00000000000001", 00:31:15.121 "model_number": "SPDK bdev Controller", 00:31:15.121 "max_namespaces": 1, 00:31:15.121 "min_cntlid": 1, 00:31:15.121 "max_cntlid": 65519, 00:31:15.121 "namespaces": [ 00:31:15.121 { 00:31:15.121 "nsid": 1, 00:31:15.121 "bdev_name": "Nvme0n1", 00:31:15.121 "name": "Nvme0n1", 00:31:15.121 "nguid": "36344730526054870025384500000044", 00:31:15.121 "uuid": "36344730-5260-5487-0025-384500000044" 00:31:15.121 } 00:31:15.121 ] 00:31:15.121 } 00:31:15.121 ] 00:31:15.121 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.121 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:15.121 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:15.121 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:15.121 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.382 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:31:15.382 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:15.382 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:15.382 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:15.382 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.382 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:31:15.382 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:31:15.382 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:31:15.382 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:15.382 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.382 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:15.382 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.682 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:15.682 07:37:22 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:15.682 07:37:22 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:15.682 07:37:22 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:15.682 07:37:22 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:15.682 07:37:22 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:15.682 07:37:22 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:15.682 07:37:22 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:15.682 rmmod nvme_tcp 00:31:15.682 rmmod nvme_fabrics 00:31:15.682 rmmod nvme_keyring 00:31:15.682 07:37:22 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:15.682 07:37:22 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:15.682 07:37:22 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:15.682 07:37:22 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 295743 ']' 00:31:15.682 07:37:22 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 295743 00:31:15.682 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 295743 ']' 00:31:15.682 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 295743 00:31:15.682 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:31:15.682 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:15.682 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 295743 00:31:15.682 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:15.682 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:15.682 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 295743' 00:31:15.682 killing process with pid 295743 00:31:15.682 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 295743 00:31:15.682 07:37:22 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 295743 00:31:15.978 07:37:23 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:15.978 07:37:23 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:15.978 07:37:23 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:15.978 07:37:23 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:15.978 07:37:23 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:15.978 07:37:23 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.978 07:37:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:15.978 07:37:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.891 07:37:25 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:17.891 00:31:17.891 real 0m12.506s 00:31:17.891 user 0m10.039s 00:31:17.891 sys 0m5.969s 00:31:17.891 07:37:25 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:17.891 07:37:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:17.891 ************************************ 00:31:17.891 END TEST nvmf_identify_passthru 00:31:17.891 ************************************ 00:31:17.891 07:37:25 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:17.891 07:37:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:17.891 07:37:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:17.891 07:37:25 -- common/autotest_common.sh@10 -- # set +x 00:31:18.153 ************************************ 00:31:18.153 START TEST nvmf_dif 00:31:18.153 ************************************ 00:31:18.153 07:37:25 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:18.153 * Looking for test storage... 
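Note: for reference, the pass/fail core of the identify_passthru run that completed above is a plain string comparison: Serial Number and Model Number are read once from the PCIe controller and once through the NVMe/TCP subsystem, and the test only fails if the two disagree. Condensed into a few lines, reusing the same spdk_nvme_identify invocations and addresses as the trace (a sketch, not the script itself):

  identify=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  pcie_sn=$($identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | awk '/Serial Number:/ {print $3}')
  tcp_sn=$($identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      | awk '/Serial Number:/ {print $3}')
  [ "$pcie_sn" = "$tcp_sn" ] || { echo "passthru identify mismatch: $pcie_sn vs $tcp_sn"; exit 1; }

Both reads returned S64GNE0R605487 (and SAMSUNG for the model), so the comparisons at identify_passthru.sh@63/@68 were no-ops and the test passed before teardown.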
00:31:18.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:18.153 07:37:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.153 07:37:25 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.153 07:37:25 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.153 07:37:25 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.153 07:37:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.153 07:37:25 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.153 07:37:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.153 07:37:25 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:31:18.153 07:37:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:18.153 07:37:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:18.153 07:37:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:18.153 07:37:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:18.153 07:37:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:18.153 07:37:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.153 07:37:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:18.153 07:37:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:18.153 07:37:25 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:18.153 07:37:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:26.298 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:26.298 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
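Note: after the NIC scan, nvmf_tcp_init (traced below) builds the loopback topology this test family relies on: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24) while cvl_0_1 stays in the default namespace as the initiator (10.0.0.1/24), and an iptables rule opens TCP/4420. Collected into one place, the plumbing amounts to roughly the following, with interface names and addresses as used in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # reachability check, as in the trace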
00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:26.298 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:26.298 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.298 07:37:32 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.299 07:37:32 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:26.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:31:26.299 00:31:26.299 --- 10.0.0.2 ping statistics --- 00:31:26.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.299 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:26.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:31:26.299 00:31:26.299 --- 10.0.0.1 ping statistics --- 00:31:26.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.299 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:26.299 07:37:32 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:28.850 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:31:28.850 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:28.850 07:37:36 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.850 07:37:36 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:28.850 07:37:36 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:28.850 07:37:36 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.850 07:37:36 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:28.850 07:37:36 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:29.112 07:37:36 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:29.112 07:37:36 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:29.112 07:37:36 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:29.112 07:37:36 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:29.112 07:37:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:29.112 07:37:36 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=301753 00:31:29.112 07:37:36 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 301753 00:31:29.112 07:37:36 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:29.112 07:37:36 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 301753 ']' 00:31:29.112 07:37:36 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.112 07:37:36 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:29.112 07:37:36 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.112 07:37:36 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:29.112 07:37:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:29.112 [2024-07-25 07:37:36.327712] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:31:29.112 [2024-07-25 07:37:36.327799] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.112 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.112 [2024-07-25 07:37:36.400756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.112 [2024-07-25 07:37:36.474849] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.112 [2024-07-25 07:37:36.474888] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.112 [2024-07-25 07:37:36.474900] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.112 [2024-07-25 07:37:36.474906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.112 [2024-07-25 07:37:36.474912] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
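The trace above is the nvmf_tcp_init step from nvmf/common.sh: the two E810 ports found earlier (cvl_0_0 and cvl_0_1) are split so that the target side lives in its own network namespace while the initiator side stays in the root namespace. A minimal sketch of the equivalent setup, assuming the same interface names and the 10.0.0.0/24 addressing used in this run:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator

nvmf_tgt (-i 0 -e 0xFFFF) is then launched inside cvl_0_0_ns_spdk, which is why NVMF_APP is prefixed with the "ip netns exec cvl_0_0_ns_spdk" command stored in NVMF_TARGET_NS_CMD.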
00:31:29.112 [2024-07-25 07:37:36.474930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.067 07:37:37 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:30.067 07:37:37 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:31:30.067 07:37:37 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:30.067 07:37:37 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:30.067 07:37:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:30.067 07:37:37 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.067 07:37:37 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:30.067 07:37:37 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:30.067 07:37:37 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.067 07:37:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:30.067 [2024-07-25 07:37:37.130020] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.067 07:37:37 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.067 07:37:37 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:30.067 07:37:37 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:30.067 07:37:37 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:30.067 07:37:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:30.067 ************************************ 00:31:30.067 START TEST fio_dif_1_default 00:31:30.067 ************************************ 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:30.067 bdev_null0 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:30.067 [2024-07-25 07:37:37.214366] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:30.067 { 00:31:30.067 "params": { 00:31:30.067 "name": "Nvme$subsystem", 00:31:30.067 "trtype": "$TEST_TRANSPORT", 00:31:30.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.067 "adrfam": "ipv4", 00:31:30.067 "trsvcid": "$NVMF_PORT", 00:31:30.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.067 "hdgst": ${hdgst:-false}, 00:31:30.067 "ddgst": ${ddgst:-false} 00:31:30.067 }, 00:31:30.067 "method": "bdev_nvme_attach_controller" 00:31:30.067 } 00:31:30.067 EOF 00:31:30.067 )") 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:30.067 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:30.068 "params": { 00:31:30.068 "name": "Nvme0", 00:31:30.068 "trtype": "tcp", 00:31:30.068 "traddr": "10.0.0.2", 00:31:30.068 "adrfam": "ipv4", 00:31:30.068 "trsvcid": "4420", 00:31:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.068 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.068 "hdgst": false, 00:31:30.068 "ddgst": false 00:31:30.068 }, 00:31:30.068 "method": "bdev_nvme_attach_controller" 00:31:30.068 }' 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.068 07:37:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.329 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:30.329 fio-3.35 00:31:30.329 Starting 1 thread 00:31:30.329 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.568 00:31:42.568 filename0: (groupid=0, jobs=1): err= 0: pid=302330: Thu Jul 25 07:37:48 2024 00:31:42.568 read: IOPS=181, BW=724KiB/s (742kB/s)(7264KiB/10028msec) 00:31:42.568 slat (nsec): min=5385, max=32606, avg=6146.45, stdev=1421.62 00:31:42.568 clat (usec): min=1333, max=43094, avg=22069.56, stdev=20418.27 00:31:42.568 lat (usec): min=1339, max=43126, avg=22075.71, stdev=20418.28 00:31:42.568 clat percentiles (usec): 00:31:42.568 | 1.00th=[ 1450], 5.00th=[ 1532], 10.00th=[ 1549], 20.00th=[ 1565], 00:31:42.568 | 30.00th=[ 1582], 40.00th=[ 1598], 50.00th=[41681], 60.00th=[42206], 00:31:42.568 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:42.568 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:42.568 | 99.99th=[43254] 00:31:42.568 bw ( KiB/s): min= 672, max= 768, per=99.95%, avg=724.80, stdev=31.62, samples=20 00:31:42.568 iops : min= 168, max= 192, 
avg=181.20, stdev= 7.90, samples=20 00:31:42.568 lat (msec) : 2=49.78%, 50=50.22% 00:31:42.568 cpu : usr=95.83%, sys=3.97%, ctx=10, majf=0, minf=224 00:31:42.568 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.568 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.568 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:42.568 00:31:42.568 Run status group 0 (all jobs): 00:31:42.568 READ: bw=724KiB/s (742kB/s), 724KiB/s-724KiB/s (742kB/s-742kB/s), io=7264KiB (7438kB), run=10028-10028msec 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.568 00:31:42.568 real 0m11.255s 00:31:42.568 user 0m25.432s 00:31:42.568 sys 0m0.674s 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:42.568 ************************************ 00:31:42.568 END TEST fio_dif_1_default 00:31:42.568 ************************************ 00:31:42.568 07:37:48 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:42.568 07:37:48 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:42.568 07:37:48 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:42.568 07:37:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:42.568 ************************************ 00:31:42.568 START TEST fio_dif_1_multi_subsystems 00:31:42.568 ************************************ 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@31 -- # create_subsystem 0 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.568 bdev_null0 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.568 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.569 [2024-07-25 07:37:48.547667] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.569 bdev_null1 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
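For reference, the create_subsystem helper being traced here reduces to four RPC calls per subsystem; the TCP transport itself was created once earlier with nvmf_create_transport -t tcp -o --dif-insert-or-strip. A sketch of the sequence for subsystem 0, assuming rpc_cmd wraps scripts/rpc.py against the target's default /var/tmp/spdk.sock (the client path and socket are assumptions; the RPC names and arguments are exactly as traced above):

# Per-subsystem setup: 64 MiB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420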
00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:42.569 { 00:31:42.569 "params": { 00:31:42.569 "name": "Nvme$subsystem", 00:31:42.569 "trtype": "$TEST_TRANSPORT", 00:31:42.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.569 "adrfam": "ipv4", 00:31:42.569 "trsvcid": "$NVMF_PORT", 00:31:42.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.569 "hdgst": ${hdgst:-false}, 00:31:42.569 "ddgst": ${ddgst:-false} 00:31:42.569 }, 00:31:42.569 "method": "bdev_nvme_attach_controller" 00:31:42.569 } 00:31:42.569 EOF 00:31:42.569 )") 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:42.569 { 00:31:42.569 "params": { 00:31:42.569 "name": "Nvme$subsystem", 00:31:42.569 "trtype": "$TEST_TRANSPORT", 00:31:42.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.569 "adrfam": "ipv4", 00:31:42.569 "trsvcid": "$NVMF_PORT", 00:31:42.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.569 "hdgst": ${hdgst:-false}, 00:31:42.569 "ddgst": ${ddgst:-false} 00:31:42.569 }, 00:31:42.569 "method": "bdev_nvme_attach_controller" 00:31:42.569 } 00:31:42.569 EOF 00:31:42.569 )") 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
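gen_nvmf_target_json builds one JSON fragment per subsystem: each iteration appends a heredoc like the one above to the config array, and the array is later joined on "," and run through jq before being handed to the fio plugin on /dev/fd/62. A simplified illustration of that bash pattern (not the exact nvmf/common.sh code; the surrounding config document is omitted here):

# Accumulate one bdev_nvme_attach_controller fragment per subsystem, then join and validate.
config=()
for subsystem in 0 1; do
    config+=("{\"params\":{\"name\":\"Nvme$subsystem\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$subsystem\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$subsystem\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
done
IFS=,                                   # join character for "${config[*]}", as in the trace
printf '[%s]\n' "${config[*]}" | jq .   # simplified; the real helper nests this inside a larger config document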
00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:42.569 "params": { 00:31:42.569 "name": "Nvme0", 00:31:42.569 "trtype": "tcp", 00:31:42.569 "traddr": "10.0.0.2", 00:31:42.569 "adrfam": "ipv4", 00:31:42.569 "trsvcid": "4420", 00:31:42.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.569 "hdgst": false, 00:31:42.569 "ddgst": false 00:31:42.569 }, 00:31:42.569 "method": "bdev_nvme_attach_controller" 00:31:42.569 },{ 00:31:42.569 "params": { 00:31:42.569 "name": "Nvme1", 00:31:42.569 "trtype": "tcp", 00:31:42.569 "traddr": "10.0.0.2", 00:31:42.569 "adrfam": "ipv4", 00:31:42.569 "trsvcid": "4420", 00:31:42.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:42.569 "hdgst": false, 00:31:42.569 "ddgst": false 00:31:42.569 }, 00:31:42.569 "method": "bdev_nvme_attach_controller" 00:31:42.569 }' 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:42.569 07:37:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.569 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:42.569 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:42.570 fio-3.35 00:31:42.570 Starting 2 threads 00:31:42.570 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.572 00:31:52.572 filename0: (groupid=0, jobs=1): err= 0: pid=304638: Thu Jul 25 07:37:59 2024 00:31:52.572 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10041msec) 00:31:52.572 slat (nsec): min=5389, max=55374, avg=6389.19, stdev=2513.79 00:31:52.572 clat (usec): min=41777, max=43775, avg=41994.17, stdev=135.54 00:31:52.572 lat (usec): min=41803, max=43811, avg=42000.56, stdev=136.06 00:31:52.572 clat percentiles (usec): 00:31:52.572 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:52.572 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:52.572 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:52.572 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:31:52.572 | 99.99th=[43779] 
00:31:52.572 bw ( KiB/s): min= 352, max= 384, per=49.99%, avg=380.80, stdev= 9.85, samples=20 00:31:52.572 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:31:52.572 lat (msec) : 50=100.00% 00:31:52.572 cpu : usr=97.13%, sys=2.65%, ctx=14, majf=0, minf=192 00:31:52.572 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.572 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.572 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:52.572 filename1: (groupid=0, jobs=1): err= 0: pid=304639: Thu Jul 25 07:37:59 2024 00:31:52.572 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10001msec) 00:31:52.572 slat (nsec): min=5383, max=36063, avg=6253.46, stdev=1925.24 00:31:52.572 clat (usec): min=41835, max=43843, avg=42003.44, stdev=167.09 00:31:52.572 lat (usec): min=41840, max=43879, avg=42009.69, stdev=168.04 00:31:52.572 clat percentiles (usec): 00:31:52.572 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:52.572 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:52.572 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:52.572 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:31:52.572 | 99.99th=[43779] 00:31:52.572 bw ( KiB/s): min= 352, max= 384, per=49.99%, avg=380.63, stdev=10.09, samples=19 00:31:52.572 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:31:52.572 lat (msec) : 50=100.00% 00:31:52.572 cpu : usr=97.08%, sys=2.72%, ctx=13, majf=0, minf=24 00:31:52.572 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.572 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.572 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:52.572 00:31:52.572 Run status group 0 (all jobs): 00:31:52.573 READ: bw=760KiB/s (778kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7632KiB (7815kB), run=10001-10041msec 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.833 07:37:59 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:31:52.833 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.833 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.834 00:31:52.834 real 0m11.521s 00:31:52.834 user 0m38.576s 00:31:52.834 sys 0m0.910s 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:52.834 07:38:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:52.834 ************************************ 00:31:52.834 END TEST fio_dif_1_multi_subsystems 00:31:52.834 ************************************ 00:31:52.834 07:38:00 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:52.834 07:38:00 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:52.834 07:38:00 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:52.834 07:38:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:52.834 ************************************ 00:31:52.834 START TEST fio_dif_rand_params 00:31:52.834 ************************************ 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:52.834 07:38:00 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:52.834 bdev_null0 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:52.834 [2024-07-25 07:38:00.136820] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:52.834 { 00:31:52.834 "params": { 00:31:52.834 "name": "Nvme$subsystem", 00:31:52.834 "trtype": "$TEST_TRANSPORT", 00:31:52.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:52.834 "adrfam": "ipv4", 00:31:52.834 "trsvcid": "$NVMF_PORT", 00:31:52.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:52.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:52.834 "hdgst": ${hdgst:-false}, 00:31:52.834 "ddgst": ${ddgst:-false} 00:31:52.834 }, 00:31:52.834 "method": "bdev_nvme_attach_controller" 
00:31:52.834 } 00:31:52.834 EOF 00:31:52.834 )") 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:52.834 "params": { 00:31:52.834 "name": "Nvme0", 00:31:52.834 "trtype": "tcp", 00:31:52.834 "traddr": "10.0.0.2", 00:31:52.834 "adrfam": "ipv4", 00:31:52.834 "trsvcid": "4420", 00:31:52.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:52.834 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:52.834 "hdgst": false, 00:31:52.834 "ddgst": false 00:31:52.834 }, 00:31:52.834 "method": "bdev_nvme_attach_controller" 00:31:52.834 }' 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:52.834 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:53.129 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:53.129 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:53.129 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:53.129 07:38:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:53.399 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:53.399 ... 
00:31:53.399 fio-3.35 00:31:53.399 Starting 3 threads 00:31:53.399 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.034 00:32:00.034 filename0: (groupid=0, jobs=1): err= 0: pid=306954: Thu Jul 25 07:38:06 2024 00:32:00.034 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(124MiB/5032msec) 00:32:00.034 slat (nsec): min=5410, max=32925, avg=7836.67, stdev=1738.48 00:32:00.034 clat (usec): min=5663, max=56916, avg=15155.17, stdev=13139.20 00:32:00.034 lat (usec): min=5675, max=56925, avg=15163.00, stdev=13139.20 00:32:00.034 clat percentiles (usec): 00:32:00.034 | 1.00th=[ 5932], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9110], 00:32:00.034 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[10814], 60.00th=[11469], 00:32:00.034 | 70.00th=[12256], 80.00th=[13304], 90.00th=[49021], 95.00th=[53216], 00:32:00.034 | 99.00th=[55313], 99.50th=[55837], 99.90th=[56886], 99.95th=[56886], 00:32:00.034 | 99.99th=[56886] 00:32:00.034 bw ( KiB/s): min=21248, max=29696, per=40.47%, avg=25395.20, stdev=3228.71, samples=10 00:32:00.034 iops : min= 166, max= 232, avg=198.40, stdev=25.22, samples=10 00:32:00.034 lat (msec) : 10=34.87%, 20=54.57%, 50=0.80%, 100=9.75% 00:32:00.034 cpu : usr=96.00%, sys=3.68%, ctx=13, majf=0, minf=86 00:32:00.034 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:00.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.034 issued rwts: total=995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:00.034 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:00.034 filename0: (groupid=0, jobs=1): err= 0: pid=306955: Thu Jul 25 07:38:06 2024 00:32:00.034 read: IOPS=165, BW=20.6MiB/s (21.6MB/s)(103MiB/5003msec) 00:32:00.034 slat (nsec): min=5388, max=49755, avg=8400.12, stdev=2630.46 00:32:00.034 clat (usec): min=6917, max=95087, avg=18152.30, stdev=16254.73 00:32:00.034 lat (usec): min=6926, max=95096, avg=18160.70, stdev=16254.85 00:32:00.034 clat percentiles (usec): 00:32:00.034 | 1.00th=[ 7373], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 9634], 00:32:00.034 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11469], 60.00th=[12256], 00:32:00.034 | 70.00th=[13173], 80.00th=[14484], 90.00th=[52691], 95.00th=[54264], 00:32:00.034 | 99.00th=[55837], 99.50th=[62129], 99.90th=[94897], 99.95th=[94897], 00:32:00.034 | 99.99th=[94897] 00:32:00.034 bw ( KiB/s): min=18432, max=29184, per=33.77%, avg=21191.11, stdev=3455.47, samples=9 00:32:00.034 iops : min= 144, max= 228, avg=165.56, stdev=27.00, samples=9 00:32:00.034 lat (msec) : 10=25.06%, 20=58.60%, 50=0.97%, 100=15.38% 00:32:00.034 cpu : usr=94.12%, sys=4.28%, ctx=364, majf=0, minf=120 00:32:00.034 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:00.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.034 issued rwts: total=826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:00.034 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:00.034 filename0: (groupid=0, jobs=1): err= 0: pid=306956: Thu Jul 25 07:38:06 2024 00:32:00.034 read: IOPS=128, BW=16.1MiB/s (16.9MB/s)(81.2MiB/5041msec) 00:32:00.034 slat (nsec): min=5394, max=32220, avg=7708.88, stdev=1601.64 00:32:00.034 clat (usec): min=7711, max=94792, avg=23249.15, stdev=19526.38 00:32:00.034 lat (usec): min=7720, max=94800, avg=23256.86, stdev=19526.56 00:32:00.034 clat percentiles (usec): 
00:32:00.034 | 1.00th=[ 8094], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9896], 00:32:00.034 | 30.00th=[10683], 40.00th=[11731], 50.00th=[12518], 60.00th=[13829], 00:32:00.034 | 70.00th=[15270], 80.00th=[52691], 90.00th=[54789], 95.00th=[55313], 00:32:00.034 | 99.00th=[58459], 99.50th=[62129], 99.90th=[94897], 99.95th=[94897], 00:32:00.034 | 99.99th=[94897] 00:32:00.034 bw ( KiB/s): min=12288, max=21504, per=26.40%, avg=16565.80, stdev=3176.10, samples=10 00:32:00.034 iops : min= 96, max= 168, avg=129.40, stdev=24.84, samples=10 00:32:00.034 lat (msec) : 10=21.38%, 20=51.38%, 50=0.46%, 100=26.77% 00:32:00.034 cpu : usr=96.55%, sys=3.12%, ctx=10, majf=0, minf=80 00:32:00.034 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:00.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.034 issued rwts: total=650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:00.034 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:00.034 00:32:00.034 Run status group 0 (all jobs): 00:32:00.034 READ: bw=61.3MiB/s (64.2MB/s), 16.1MiB/s-24.7MiB/s (16.9MB/s-25.9MB/s), io=309MiB (324MB), run=5003-5041msec 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:00.034 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 bdev_null0 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 [2024-07-25 07:38:06.363103] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 bdev_null1 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 bdev_null2 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:00.035 { 00:32:00.035 "params": { 00:32:00.035 "name": "Nvme$subsystem", 00:32:00.035 "trtype": "$TEST_TRANSPORT", 00:32:00.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.035 "adrfam": "ipv4", 00:32:00.035 "trsvcid": "$NVMF_PORT", 00:32:00.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.035 "hdgst": ${hdgst:-false}, 00:32:00.035 "ddgst": ${ddgst:-false} 00:32:00.035 }, 00:32:00.035 "method": "bdev_nvme_attach_controller" 00:32:00.035 } 00:32:00.035 EOF 00:32:00.035 )") 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:00.035 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:00.035 { 00:32:00.035 "params": { 00:32:00.035 "name": "Nvme$subsystem", 00:32:00.035 "trtype": "$TEST_TRANSPORT", 00:32:00.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.035 "adrfam": "ipv4", 00:32:00.035 "trsvcid": "$NVMF_PORT", 00:32:00.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.035 "hdgst": ${hdgst:-false}, 00:32:00.035 "ddgst": ${ddgst:-false} 00:32:00.035 }, 00:32:00.035 "method": "bdev_nvme_attach_controller" 00:32:00.035 } 00:32:00.035 EOF 00:32:00.035 )") 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:00.036 { 00:32:00.036 "params": { 00:32:00.036 "name": "Nvme$subsystem", 00:32:00.036 "trtype": "$TEST_TRANSPORT", 00:32:00.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.036 "adrfam": "ipv4", 00:32:00.036 "trsvcid": "$NVMF_PORT", 00:32:00.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.036 "hdgst": ${hdgst:-false}, 00:32:00.036 "ddgst": ${ddgst:-false} 00:32:00.036 }, 00:32:00.036 "method": "bdev_nvme_attach_controller" 00:32:00.036 } 00:32:00.036 EOF 00:32:00.036 )") 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:00.036 "params": { 00:32:00.036 "name": "Nvme0", 00:32:00.036 "trtype": "tcp", 00:32:00.036 "traddr": "10.0.0.2", 00:32:00.036 "adrfam": "ipv4", 00:32:00.036 "trsvcid": "4420", 00:32:00.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:00.036 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:00.036 "hdgst": false, 00:32:00.036 "ddgst": false 00:32:00.036 }, 00:32:00.036 "method": "bdev_nvme_attach_controller" 00:32:00.036 },{ 00:32:00.036 "params": { 00:32:00.036 "name": "Nvme1", 00:32:00.036 "trtype": "tcp", 00:32:00.036 "traddr": "10.0.0.2", 00:32:00.036 "adrfam": "ipv4", 00:32:00.036 "trsvcid": "4420", 00:32:00.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:00.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:00.036 "hdgst": false, 00:32:00.036 "ddgst": false 00:32:00.036 }, 00:32:00.036 "method": "bdev_nvme_attach_controller" 00:32:00.036 },{ 00:32:00.036 "params": { 00:32:00.036 "name": "Nvme2", 00:32:00.036 "trtype": "tcp", 00:32:00.036 "traddr": "10.0.0.2", 00:32:00.036 "adrfam": "ipv4", 00:32:00.036 "trsvcid": "4420", 00:32:00.036 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:00.036 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:00.036 "hdgst": false, 00:32:00.036 "ddgst": false 00:32:00.036 }, 00:32:00.036 "method": "bdev_nvme_attach_controller" 00:32:00.036 }' 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:00.036 07:38:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:00.036 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:00.036 ... 00:32:00.036 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:00.036 ... 00:32:00.036 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:00.036 ... 00:32:00.036 fio-3.35 00:32:00.036 Starting 24 threads 00:32:00.036 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.278 00:32:12.278 filename0: (groupid=0, jobs=1): err= 0: pid=308342: Thu Jul 25 07:38:17 2024 00:32:12.278 read: IOPS=505, BW=2023KiB/s (2071kB/s)(19.8MiB/10023msec) 00:32:12.278 slat (nsec): min=5543, max=78627, avg=12188.57, stdev=10240.79 00:32:12.278 clat (usec): min=2940, max=56250, avg=31548.30, stdev=6161.16 00:32:12.278 lat (usec): min=2977, max=56257, avg=31560.49, stdev=6162.05 00:32:12.278 clat percentiles (usec): 00:32:12.278 | 1.00th=[14615], 5.00th=[19268], 10.00th=[22676], 20.00th=[31065], 00:32:12.278 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:12.278 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34866], 95.00th=[41157], 00:32:12.278 | 99.00th=[49021], 99.50th=[52691], 99.90th=[56361], 99.95th=[56361], 00:32:12.278 | 99.99th=[56361] 00:32:12.278 bw ( KiB/s): min= 1872, max= 2480, per=4.35%, avg=2022.20, stdev=129.88, samples=20 00:32:12.278 iops : min= 468, max= 620, avg=505.55, stdev=32.47, samples=20 00:32:12.278 lat (msec) : 4=0.28%, 10=0.39%, 20=4.99%, 50=93.59%, 100=0.75% 00:32:12.278 cpu : usr=98.93%, sys=0.70%, ctx=46, majf=0, minf=50 00:32:12.278 IO depths : 1=2.1%, 2=4.2%, 4=13.7%, 8=68.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:32:12.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 complete : 0=0.0%, 4=91.8%, 8=3.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 issued rwts: total=5068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.278 filename0: (groupid=0, jobs=1): err= 0: pid=308343: Thu Jul 25 07:38:17 2024 00:32:12.278 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10015msec) 00:32:12.278 slat (nsec): min=5594, max=90012, avg=18760.08, stdev=13096.04 00:32:12.278 clat (usec): min=17533, max=50654, avg=32710.04, stdev=1611.18 00:32:12.278 lat (usec): min=17540, max=50660, avg=32728.80, stdev=1611.08 00:32:12.278 clat percentiles (usec): 00:32:12.278 | 1.00th=[29492], 5.00th=[31327], 10.00th=[31589], 20.00th=[32113], 00:32:12.278 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:32:12.278 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:32:12.278 | 99.00th=[39060], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:32:12.278 | 99.99th=[50594] 00:32:12.278 bw ( KiB/s): min= 1916, max= 2048, per=4.20%, avg=1950.74, stdev=45.96, samples=19 00:32:12.278 iops : min= 479, max= 512, avg=487.68, stdev=11.49, samples=19 00:32:12.278 lat (msec) : 
20=0.08%, 50=99.88%, 100=0.04% 00:32:12.278 cpu : usr=98.75%, sys=0.84%, ctx=76, majf=0, minf=36 00:32:12.278 IO depths : 1=1.8%, 2=3.8%, 4=12.7%, 8=70.9%, 16=10.7%, 32=0.0%, >=64=0.0% 00:32:12.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 complete : 0=0.0%, 4=90.7%, 8=3.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.278 filename0: (groupid=0, jobs=1): err= 0: pid=308344: Thu Jul 25 07:38:17 2024 00:32:12.278 read: IOPS=493, BW=1972KiB/s (2020kB/s)(19.3MiB/10020msec) 00:32:12.278 slat (nsec): min=5573, max=75468, avg=13287.07, stdev=10589.51 00:32:12.278 clat (usec): min=8885, max=35234, avg=32340.21, stdev=2182.58 00:32:12.278 lat (usec): min=8893, max=35241, avg=32353.50, stdev=2183.15 00:32:12.278 clat percentiles (usec): 00:32:12.278 | 1.00th=[18744], 5.00th=[31065], 10.00th=[31589], 20.00th=[32113], 00:32:12.278 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:12.278 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:12.278 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:32:12.278 | 99.99th=[35390] 00:32:12.278 bw ( KiB/s): min= 1916, max= 2152, per=4.24%, avg=1969.35, stdev=73.34, samples=20 00:32:12.278 iops : min= 479, max= 538, avg=492.30, stdev=18.30, samples=20 00:32:12.278 lat (msec) : 10=0.14%, 20=1.15%, 50=98.70% 00:32:12.278 cpu : usr=99.15%, sys=0.52%, ctx=69, majf=0, minf=33 00:32:12.278 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:12.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 issued rwts: total=4941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.278 filename0: (groupid=0, jobs=1): err= 0: pid=308345: Thu Jul 25 07:38:17 2024 00:32:12.278 read: IOPS=453, BW=1816KiB/s (1859kB/s)(17.7MiB/10004msec) 00:32:12.278 slat (nsec): min=5547, max=96209, avg=15162.47, stdev=14086.09 00:32:12.278 clat (usec): min=13783, max=70538, avg=35168.63, stdev=6509.68 00:32:12.278 lat (usec): min=13789, max=70562, avg=35183.79, stdev=6508.85 00:32:12.278 clat percentiles (usec): 00:32:12.278 | 1.00th=[19006], 5.00th=[28443], 10.00th=[31589], 20.00th=[32113], 00:32:12.278 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33424], 00:32:12.278 | 70.00th=[34341], 80.00th=[39584], 90.00th=[44827], 95.00th=[47973], 00:32:12.278 | 99.00th=[56886], 99.50th=[59507], 99.90th=[70779], 99.95th=[70779], 00:32:12.278 | 99.99th=[70779] 00:32:12.278 bw ( KiB/s): min= 1603, max= 1971, per=3.90%, avg=1813.89, stdev=92.46, samples=19 00:32:12.278 iops : min= 400, max= 492, avg=453.32, stdev=23.09, samples=19 00:32:12.278 lat (msec) : 20=1.26%, 50=95.75%, 100=2.99% 00:32:12.278 cpu : usr=99.04%, sys=0.65%, ctx=31, majf=0, minf=34 00:32:12.278 IO depths : 1=0.2%, 2=0.6%, 4=6.8%, 8=77.6%, 16=14.8%, 32=0.0%, >=64=0.0% 00:32:12.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 complete : 0=0.0%, 4=90.2%, 8=6.5%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 issued rwts: total=4541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.278 filename0: (groupid=0, jobs=1): err= 0: pid=308347: Thu Jul 25 
07:38:17 2024 00:32:12.278 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10005msec) 00:32:12.278 slat (usec): min=5, max=113, avg=20.26, stdev=13.44 00:32:12.278 clat (usec): min=12752, max=76441, avg=32531.40, stdev=2809.72 00:32:12.278 lat (usec): min=12759, max=76461, avg=32551.65, stdev=2810.49 00:32:12.278 clat percentiles (usec): 00:32:12.278 | 1.00th=[21627], 5.00th=[30802], 10.00th=[31589], 20.00th=[31851], 00:32:12.278 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:12.278 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:32:12.278 | 99.00th=[39060], 99.50th=[51643], 99.90th=[55837], 99.95th=[55837], 00:32:12.278 | 99.99th=[76022] 00:32:12.278 bw ( KiB/s): min= 1872, max= 2052, per=4.20%, avg=1952.37, stdev=60.86, samples=19 00:32:12.278 iops : min= 468, max= 513, avg=488.05, stdev=15.24, samples=19 00:32:12.278 lat (msec) : 20=0.41%, 50=98.94%, 100=0.65% 00:32:12.278 cpu : usr=98.36%, sys=0.92%, ctx=313, majf=0, minf=32 00:32:12.278 IO depths : 1=5.8%, 2=11.9%, 4=24.3%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:32:12.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 issued rwts: total=4892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.278 filename0: (groupid=0, jobs=1): err= 0: pid=308348: Thu Jul 25 07:38:17 2024 00:32:12.278 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10010msec) 00:32:12.278 slat (nsec): min=5649, max=78167, avg=20439.04, stdev=13840.50 00:32:12.278 clat (usec): min=16081, max=37521, avg=32516.87, stdev=1255.01 00:32:12.278 lat (usec): min=16087, max=37543, avg=32537.31, stdev=1255.51 00:32:12.278 clat percentiles (usec): 00:32:12.278 | 1.00th=[30278], 5.00th=[31065], 10.00th=[31589], 20.00th=[32113], 00:32:12.278 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:12.278 | 70.00th=[32900], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:32:12.278 | 99.00th=[34866], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:32:12.278 | 99.99th=[37487] 00:32:12.278 bw ( KiB/s): min= 1916, max= 2048, per=4.20%, avg=1953.53, stdev=57.42, samples=19 00:32:12.278 iops : min= 479, max= 512, avg=488.26, stdev=14.34, samples=19 00:32:12.278 lat (msec) : 20=0.33%, 50=99.67% 00:32:12.278 cpu : usr=97.28%, sys=1.40%, ctx=123, majf=0, minf=27 00:32:12.278 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:12.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.278 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.279 filename0: (groupid=0, jobs=1): err= 0: pid=308349: Thu Jul 25 07:38:17 2024 00:32:12.279 read: IOPS=495, BW=1981KiB/s (2029kB/s)(19.4MiB/10029msec) 00:32:12.279 slat (nsec): min=5424, max=59903, avg=11546.90, stdev=7741.40 00:32:12.279 clat (usec): min=12784, max=55829, avg=32214.43, stdev=4131.22 00:32:12.279 lat (usec): min=12792, max=55835, avg=32225.98, stdev=4131.08 00:32:12.279 clat percentiles (usec): 00:32:12.279 | 1.00th=[16057], 5.00th=[24249], 10.00th=[30278], 20.00th=[31851], 00:32:12.279 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:12.279 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[36439], 
00:32:12.279 | 99.00th=[45876], 99.50th=[49021], 99.90th=[55837], 99.95th=[55837], 00:32:12.279 | 99.99th=[55837] 00:32:12.279 bw ( KiB/s): min= 1916, max= 2144, per=4.26%, avg=1980.35, stdev=68.98, samples=20 00:32:12.279 iops : min= 479, max= 536, avg=495.05, stdev=17.21, samples=20 00:32:12.279 lat (msec) : 20=3.44%, 50=96.36%, 100=0.20% 00:32:12.279 cpu : usr=98.96%, sys=0.71%, ctx=23, majf=0, minf=50 00:32:12.279 IO depths : 1=2.0%, 2=4.0%, 4=11.9%, 8=69.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:32:12.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 complete : 0=0.0%, 4=91.4%, 8=5.0%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 issued rwts: total=4968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.279 filename0: (groupid=0, jobs=1): err= 0: pid=308351: Thu Jul 25 07:38:17 2024 00:32:12.279 read: IOPS=483, BW=1935KiB/s (1981kB/s)(18.9MiB/10010msec) 00:32:12.279 slat (nsec): min=5399, max=97631, avg=17752.10, stdev=14273.51 00:32:12.279 clat (usec): min=10427, max=54635, avg=32976.84, stdev=3045.61 00:32:12.279 lat (usec): min=10433, max=54657, avg=32994.59, stdev=3045.71 00:32:12.279 clat percentiles (usec): 00:32:12.279 | 1.00th=[23987], 5.00th=[31327], 10.00th=[31589], 20.00th=[32113], 00:32:12.279 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:12.279 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:32:12.279 | 99.00th=[46400], 99.50th=[51119], 99.90th=[54789], 99.95th=[54789], 00:32:12.279 | 99.99th=[54789] 00:32:12.279 bw ( KiB/s): min= 1792, max= 2004, per=4.16%, avg=1931.35, stdev=57.64, samples=20 00:32:12.279 iops : min= 448, max= 501, avg=482.65, stdev=14.41, samples=20 00:32:12.279 lat (msec) : 20=0.45%, 50=98.97%, 100=0.58% 00:32:12.279 cpu : usr=98.21%, sys=0.96%, ctx=21, majf=0, minf=33 00:32:12.279 IO depths : 1=1.2%, 2=2.4%, 4=6.2%, 8=74.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:32:12.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 complete : 0=0.0%, 4=90.6%, 8=7.8%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 issued rwts: total=4842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.279 filename1: (groupid=0, jobs=1): err= 0: pid=308353: Thu Jul 25 07:38:17 2024 00:32:12.279 read: IOPS=444, BW=1779KiB/s (1821kB/s)(17.4MiB/10005msec) 00:32:12.279 slat (nsec): min=5573, max=96053, avg=15387.04, stdev=13165.19 00:32:12.279 clat (usec): min=4694, max=63840, avg=35889.20, stdev=7084.95 00:32:12.279 lat (usec): min=4700, max=63846, avg=35904.59, stdev=7083.70 00:32:12.279 clat percentiles (usec): 00:32:12.279 | 1.00th=[20055], 5.00th=[26346], 10.00th=[31065], 20.00th=[32113], 00:32:12.279 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33817], 00:32:12.279 | 70.00th=[38536], 80.00th=[42206], 90.00th=[46400], 95.00th=[48497], 00:32:12.279 | 99.00th=[55837], 99.50th=[60031], 99.90th=[63177], 99.95th=[63177], 00:32:12.279 | 99.99th=[63701] 00:32:12.279 bw ( KiB/s): min= 1664, max= 1827, per=3.81%, avg=1772.63, stdev=49.10, samples=19 00:32:12.279 iops : min= 416, max= 456, avg=443.00, stdev=12.22, samples=19 00:32:12.279 lat (msec) : 10=0.22%, 20=0.70%, 50=95.53%, 100=3.55% 00:32:12.279 cpu : usr=98.78%, sys=0.80%, ctx=137, majf=0, minf=46 00:32:12.279 IO depths : 1=0.8%, 2=1.6%, 4=10.4%, 8=73.4%, 16=13.9%, 32=0.0%, >=64=0.0% 00:32:12.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:12.279 complete : 0=0.0%, 4=90.9%, 8=5.4%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 issued rwts: total=4449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.279 filename1: (groupid=0, jobs=1): err= 0: pid=308354: Thu Jul 25 07:38:17 2024 00:32:12.279 read: IOPS=490, BW=1961KiB/s (2009kB/s)(19.2MiB/10017msec) 00:32:12.279 slat (nsec): min=5220, max=89548, avg=15440.01, stdev=11818.91 00:32:12.279 clat (usec): min=19029, max=36471, avg=32497.48, stdev=1493.93 00:32:12.279 lat (usec): min=19036, max=36481, avg=32512.92, stdev=1494.10 00:32:12.279 clat percentiles (usec): 00:32:12.279 | 1.00th=[26870], 5.00th=[31327], 10.00th=[31589], 20.00th=[32113], 00:32:12.279 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:12.279 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:32:12.279 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:32:12.279 | 99.99th=[36439] 00:32:12.279 bw ( KiB/s): min= 1920, max= 2048, per=4.22%, avg=1960.21, stdev=60.13, samples=19 00:32:12.279 iops : min= 480, max= 512, avg=489.89, stdev=14.97, samples=19 00:32:12.279 lat (msec) : 20=0.33%, 50=99.67% 00:32:12.279 cpu : usr=98.97%, sys=0.64%, ctx=151, majf=0, minf=35 00:32:12.279 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:12.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.279 filename1: (groupid=0, jobs=1): err= 0: pid=308355: Thu Jul 25 07:38:17 2024 00:32:12.279 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.5MiB/10025msec) 00:32:12.279 slat (usec): min=5, max=109, avg=20.48, stdev=15.36 00:32:12.279 clat (usec): min=17944, max=57427, avg=33762.13, stdev=5174.35 00:32:12.279 lat (usec): min=17960, max=57466, avg=33782.61, stdev=5172.64 00:32:12.279 clat percentiles (usec): 00:32:12.279 | 1.00th=[21627], 5.00th=[26870], 10.00th=[30802], 20.00th=[31851], 00:32:12.279 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:32:12.279 | 70.00th=[33162], 80.00th=[34341], 90.00th=[40109], 95.00th=[46400], 00:32:12.279 | 99.00th=[52167], 99.50th=[54264], 99.90th=[55837], 99.95th=[56361], 00:32:12.279 | 99.99th=[57410] 00:32:12.279 bw ( KiB/s): min= 1760, max= 2112, per=4.06%, avg=1885.15, stdev=82.73, samples=20 00:32:12.279 iops : min= 440, max= 528, avg=471.25, stdev=20.72, samples=20 00:32:12.279 lat (msec) : 20=0.51%, 50=97.42%, 100=2.07% 00:32:12.279 cpu : usr=98.84%, sys=0.80%, ctx=43, majf=0, minf=30 00:32:12.279 IO depths : 1=1.6%, 2=4.3%, 4=14.5%, 8=66.7%, 16=12.8%, 32=0.0%, >=64=0.0% 00:32:12.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 complete : 0=0.0%, 4=92.1%, 8=3.9%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 issued rwts: total=4729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.279 filename1: (groupid=0, jobs=1): err= 0: pid=308356: Thu Jul 25 07:38:17 2024 00:32:12.279 read: IOPS=447, BW=1791KiB/s (1834kB/s)(17.5MiB/10006msec) 00:32:12.279 slat (nsec): min=5539, max=99636, avg=16271.34, stdev=14382.99 00:32:12.279 clat (usec): min=8485, max=62527, avg=35639.06, stdev=6991.11 00:32:12.279 lat (usec): min=8491, max=62533, 
avg=35655.33, stdev=6990.06 00:32:12.279 clat percentiles (usec): 00:32:12.279 | 1.00th=[20317], 5.00th=[25297], 10.00th=[29754], 20.00th=[32113], 00:32:12.279 | 30.00th=[32375], 40.00th=[32900], 50.00th=[32900], 60.00th=[33817], 00:32:12.279 | 70.00th=[38011], 80.00th=[42206], 90.00th=[45876], 95.00th=[49021], 00:32:12.279 | 99.00th=[54789], 99.50th=[58459], 99.90th=[61604], 99.95th=[62653], 00:32:12.279 | 99.99th=[62653] 00:32:12.279 bw ( KiB/s): min= 1640, max= 1899, per=3.84%, avg=1785.95, stdev=60.20, samples=19 00:32:12.279 iops : min= 410, max= 474, avg=446.37, stdev=14.98, samples=19 00:32:12.279 lat (msec) : 10=0.22%, 20=0.69%, 50=95.54%, 100=3.55% 00:32:12.279 cpu : usr=99.03%, sys=0.68%, ctx=31, majf=0, minf=31 00:32:12.279 IO depths : 1=0.3%, 2=0.7%, 4=8.5%, 8=76.1%, 16=14.4%, 32=0.0%, >=64=0.0% 00:32:12.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 complete : 0=0.0%, 4=90.4%, 8=6.1%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 issued rwts: total=4481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.279 filename1: (groupid=0, jobs=1): err= 0: pid=308357: Thu Jul 25 07:38:17 2024 00:32:12.279 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10023msec) 00:32:12.279 slat (nsec): min=5406, max=78712, avg=13036.09, stdev=11383.90 00:32:12.279 clat (usec): min=14977, max=36151, avg=31810.03, stdev=3189.87 00:32:12.279 lat (usec): min=15011, max=36162, avg=31823.06, stdev=3190.75 00:32:12.279 clat percentiles (usec): 00:32:12.279 | 1.00th=[16450], 5.00th=[23725], 10.00th=[31065], 20.00th=[31851], 00:32:12.279 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:12.279 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:12.279 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:32:12.279 | 99.99th=[35914] 00:32:12.279 bw ( KiB/s): min= 1920, max= 2299, per=4.31%, avg=2002.70, stdev=103.16, samples=20 00:32:12.279 iops : min= 480, max= 574, avg=500.60, stdev=25.66, samples=20 00:32:12.279 lat (msec) : 20=3.18%, 50=96.82% 00:32:12.279 cpu : usr=97.69%, sys=1.41%, ctx=80, majf=0, minf=42 00:32:12.279 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:12.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.279 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.279 filename1: (groupid=0, jobs=1): err= 0: pid=308358: Thu Jul 25 07:38:17 2024 00:32:12.280 read: IOPS=490, BW=1961KiB/s (2009kB/s)(19.2MiB/10017msec) 00:32:12.280 slat (usec): min=5, max=103, avg=14.22, stdev=12.34 00:32:12.280 clat (usec): min=9995, max=46501, avg=32513.77, stdev=1639.84 00:32:12.280 lat (usec): min=10003, max=46516, avg=32527.99, stdev=1640.04 00:32:12.280 clat percentiles (usec): 00:32:12.280 | 1.00th=[26870], 5.00th=[31327], 10.00th=[31589], 20.00th=[32113], 00:32:12.280 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:12.280 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:32:12.280 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:32:12.280 | 99.99th=[46400] 00:32:12.280 bw ( KiB/s): min= 1920, max= 2048, per=4.22%, avg=1960.05, stdev=60.24, samples=19 00:32:12.280 iops : min= 480, max= 512, avg=489.89, stdev=14.97, 
samples=19 00:32:12.280 lat (msec) : 10=0.02%, 20=0.41%, 50=99.57% 00:32:12.280 cpu : usr=99.30%, sys=0.39%, ctx=12, majf=0, minf=44 00:32:12.280 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:12.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.280 filename1: (groupid=0, jobs=1): err= 0: pid=308359: Thu Jul 25 07:38:17 2024 00:32:12.280 read: IOPS=483, BW=1935KiB/s (1981kB/s)(19.0MiB/10031msec) 00:32:12.280 slat (usec): min=5, max=109, avg=17.46, stdev=15.35 00:32:12.280 clat (usec): min=11580, max=57475, avg=32896.46, stdev=5011.13 00:32:12.280 lat (usec): min=11593, max=57486, avg=32913.92, stdev=5011.75 00:32:12.280 clat percentiles (usec): 00:32:12.280 | 1.00th=[16712], 5.00th=[24249], 10.00th=[29492], 20.00th=[31851], 00:32:12.280 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:12.280 | 70.00th=[33162], 80.00th=[33817], 90.00th=[37487], 95.00th=[42730], 00:32:12.280 | 99.00th=[50594], 99.50th=[51643], 99.90th=[56361], 99.95th=[57410], 00:32:12.280 | 99.99th=[57410] 00:32:12.280 bw ( KiB/s): min= 1792, max= 2096, per=4.17%, avg=1938.20, stdev=88.26, samples=20 00:32:12.280 iops : min= 448, max= 524, avg=484.55, stdev=22.07, samples=20 00:32:12.280 lat (msec) : 20=1.81%, 50=96.74%, 100=1.44% 00:32:12.280 cpu : usr=99.01%, sys=0.71%, ctx=10, majf=0, minf=33 00:32:12.280 IO depths : 1=3.4%, 2=6.8%, 4=15.7%, 8=63.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:32:12.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 complete : 0=0.0%, 4=92.1%, 8=3.8%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 issued rwts: total=4852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.280 filename1: (groupid=0, jobs=1): err= 0: pid=308360: Thu Jul 25 07:38:17 2024 00:32:12.280 read: IOPS=487, BW=1952KiB/s (1998kB/s)(19.1MiB/10002msec) 00:32:12.280 slat (nsec): min=5601, max=79231, avg=15859.13, stdev=12940.46 00:32:12.280 clat (usec): min=25388, max=54569, avg=32641.45, stdev=1539.88 00:32:12.280 lat (usec): min=25396, max=54593, avg=32657.31, stdev=1539.55 00:32:12.280 clat percentiles (usec): 00:32:12.280 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[32113], 00:32:12.280 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:12.280 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:12.280 | 99.00th=[34866], 99.50th=[35914], 99.90th=[54789], 99.95th=[54789], 00:32:12.280 | 99.99th=[54789] 00:32:12.280 bw ( KiB/s): min= 1792, max= 2052, per=4.19%, avg=1947.63, stdev=68.17, samples=19 00:32:12.280 iops : min= 448, max= 513, avg=486.63, stdev=17.08, samples=19 00:32:12.280 lat (msec) : 50=99.67%, 100=0.33% 00:32:12.280 cpu : usr=99.27%, sys=0.45%, ctx=8, majf=0, minf=49 00:32:12.280 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:12.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.280 filename2: (groupid=0, jobs=1): err= 0: pid=308361: 
Thu Jul 25 07:38:17 2024 00:32:12.280 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10003msec) 00:32:12.280 slat (usec): min=5, max=112, avg=18.58, stdev=16.11 00:32:12.280 clat (usec): min=21705, max=48132, avg=32619.83, stdev=1021.64 00:32:12.280 lat (usec): min=21711, max=48139, avg=32638.41, stdev=1020.43 00:32:12.280 clat percentiles (usec): 00:32:12.280 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[32113], 00:32:12.280 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:12.280 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:12.280 | 99.00th=[34866], 99.50th=[36439], 99.90th=[40633], 99.95th=[41157], 00:32:12.280 | 99.99th=[47973] 00:32:12.280 bw ( KiB/s): min= 1792, max= 2048, per=4.20%, avg=1953.21, stdev=71.68, samples=19 00:32:12.280 iops : min= 448, max= 512, avg=488.26, stdev=17.87, samples=19 00:32:12.280 lat (msec) : 50=100.00% 00:32:12.280 cpu : usr=98.90%, sys=0.75%, ctx=71, majf=0, minf=28 00:32:12.280 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:12.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.280 filename2: (groupid=0, jobs=1): err= 0: pid=308362: Thu Jul 25 07:38:17 2024 00:32:12.280 read: IOPS=479, BW=1918KiB/s (1965kB/s)(18.8MiB/10008msec) 00:32:12.280 slat (usec): min=5, max=118, avg=17.24, stdev=14.14 00:32:12.280 clat (usec): min=13203, max=60630, avg=33253.52, stdev=4155.27 00:32:12.280 lat (usec): min=13209, max=60636, avg=33270.76, stdev=4153.89 00:32:12.280 clat percentiles (usec): 00:32:12.280 | 1.00th=[20841], 5.00th=[30540], 10.00th=[31327], 20.00th=[32113], 00:32:12.280 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:32:12.280 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34866], 95.00th=[41157], 00:32:12.280 | 99.00th=[52691], 99.50th=[55313], 99.90th=[60556], 99.95th=[60556], 00:32:12.280 | 99.99th=[60556] 00:32:12.280 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1914.55, stdev=73.07, samples=20 00:32:12.280 iops : min= 448, max= 512, avg=478.30, stdev=18.41, samples=20 00:32:12.280 lat (msec) : 20=0.71%, 50=98.02%, 100=1.27% 00:32:12.280 cpu : usr=99.19%, sys=0.50%, ctx=14, majf=0, minf=44 00:32:12.280 IO depths : 1=1.3%, 2=2.7%, 4=9.6%, 8=73.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:32:12.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 complete : 0=0.0%, 4=90.5%, 8=5.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.280 filename2: (groupid=0, jobs=1): err= 0: pid=308363: Thu Jul 25 07:38:17 2024 00:32:12.280 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.2MiB/10017msec) 00:32:12.280 slat (nsec): min=5613, max=99620, avg=18251.48, stdev=12970.07 00:32:12.280 clat (usec): min=15801, max=46529, avg=32371.13, stdev=1884.48 00:32:12.280 lat (usec): min=15812, max=46542, avg=32389.38, stdev=1885.36 00:32:12.280 clat percentiles (usec): 00:32:12.280 | 1.00th=[20841], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:32:12.280 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:12.280 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:12.280 | 
99.00th=[34866], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:32:12.280 | 99.99th=[46400] 00:32:12.280 bw ( KiB/s): min= 1916, max= 2048, per=4.22%, avg=1963.95, stdev=62.76, samples=20 00:32:12.280 iops : min= 479, max= 512, avg=490.95, stdev=15.64, samples=20 00:32:12.280 lat (msec) : 20=0.69%, 50=99.31% 00:32:12.280 cpu : usr=99.20%, sys=0.50%, ctx=9, majf=0, minf=54 00:32:12.280 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:12.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.280 filename2: (groupid=0, jobs=1): err= 0: pid=308364: Thu Jul 25 07:38:17 2024 00:32:12.280 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.0MiB/10001msec) 00:32:12.280 slat (nsec): min=5583, max=78340, avg=13756.36, stdev=11118.76 00:32:12.280 clat (usec): min=17228, max=68948, avg=32677.97, stdev=2741.81 00:32:12.280 lat (usec): min=17235, max=68971, avg=32691.72, stdev=2741.73 00:32:12.280 clat percentiles (usec): 00:32:12.280 | 1.00th=[26608], 5.00th=[31327], 10.00th=[31589], 20.00th=[32113], 00:32:12.280 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:12.280 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:32:12.280 | 99.00th=[38011], 99.50th=[49546], 99.90th=[68682], 99.95th=[68682], 00:32:12.280 | 99.99th=[68682] 00:32:12.280 bw ( KiB/s): min= 1715, max= 2052, per=4.19%, avg=1945.79, stdev=80.18, samples=19 00:32:12.280 iops : min= 428, max= 513, avg=486.37, stdev=20.18, samples=19 00:32:12.280 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:32:12.280 cpu : usr=93.35%, sys=3.12%, ctx=255, majf=0, minf=38 00:32:12.280 IO depths : 1=5.9%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:12.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.280 issued rwts: total=4876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.280 filename2: (groupid=0, jobs=1): err= 0: pid=308365: Thu Jul 25 07:38:17 2024 00:32:12.280 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10016msec) 00:32:12.280 slat (nsec): min=5608, max=74432, avg=16453.35, stdev=11553.95 00:32:12.281 clat (usec): min=20983, max=38082, avg=32577.95, stdev=977.02 00:32:12.281 lat (usec): min=20992, max=38105, avg=32594.41, stdev=977.04 00:32:12.281 clat percentiles (usec): 00:32:12.281 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[32113], 00:32:12.281 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:12.281 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:12.281 | 99.00th=[34866], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:32:12.281 | 99.99th=[38011] 00:32:12.281 bw ( KiB/s): min= 1916, max= 2048, per=4.20%, avg=1951.15, stdev=56.80, samples=20 00:32:12.281 iops : min= 479, max= 512, avg=487.75, stdev=14.14, samples=20 00:32:12.281 lat (msec) : 50=100.00% 00:32:12.281 cpu : usr=93.74%, sys=2.87%, ctx=334, majf=0, minf=40 00:32:12.281 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:12.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.281 complete : 0=0.0%, 4=94.1%, 
8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.281 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.281 filename2: (groupid=0, jobs=1): err= 0: pid=308366: Thu Jul 25 07:38:17 2024 00:32:12.281 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.3MiB/10023msec) 00:32:12.281 slat (nsec): min=5617, max=75050, avg=15024.54, stdev=11782.65 00:32:12.281 clat (usec): min=15565, max=35954, avg=32315.71, stdev=2136.36 00:32:12.281 lat (usec): min=15579, max=35964, avg=32330.73, stdev=2136.41 00:32:12.281 clat percentiles (usec): 00:32:12.281 | 1.00th=[18220], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:32:12.281 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:32:12.281 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:12.281 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:32:12.281 | 99.99th=[35914] 00:32:12.281 bw ( KiB/s): min= 1920, max= 2048, per=4.24%, avg=1970.70, stdev=63.72, samples=20 00:32:12.281 iops : min= 480, max= 512, avg=492.60, stdev=15.84, samples=20 00:32:12.281 lat (msec) : 20=1.21%, 50=98.79% 00:32:12.281 cpu : usr=99.19%, sys=0.52%, ctx=9, majf=0, minf=37 00:32:12.281 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:12.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.281 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.281 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.281 filename2: (groupid=0, jobs=1): err= 0: pid=308367: Thu Jul 25 07:38:17 2024 00:32:12.281 read: IOPS=506, BW=2024KiB/s (2073kB/s)(19.8MiB/10023msec) 00:32:12.281 slat (nsec): min=5619, max=66885, avg=11540.50, stdev=7209.25 00:32:12.281 clat (usec): min=15218, max=35248, avg=31516.44, stdev=3515.50 00:32:12.281 lat (usec): min=15235, max=35271, avg=31527.98, stdev=3516.16 00:32:12.281 clat percentiles (usec): 00:32:12.281 | 1.00th=[16712], 5.00th=[22414], 10.00th=[30278], 20.00th=[31589], 00:32:12.281 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:12.281 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:12.281 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:32:12.281 | 99.99th=[35390] 00:32:12.281 bw ( KiB/s): min= 1916, max= 2304, per=4.35%, avg=2021.95, stdev=121.92, samples=20 00:32:12.281 iops : min= 479, max= 576, avg=505.45, stdev=30.47, samples=20 00:32:12.281 lat (msec) : 20=3.75%, 50=96.25% 00:32:12.281 cpu : usr=95.90%, sys=2.01%, ctx=123, majf=0, minf=45 00:32:12.281 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:12.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.281 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.281 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.281 filename2: (groupid=0, jobs=1): err= 0: pid=308368: Thu Jul 25 07:38:17 2024 00:32:12.281 read: IOPS=487, BW=1952KiB/s (1998kB/s)(19.1MiB/10002msec) 00:32:12.281 slat (nsec): min=5636, max=95535, avg=18965.34, stdev=13591.45 00:32:12.281 clat (usec): min=17284, max=55898, avg=32624.85, stdev=1785.61 00:32:12.281 lat (usec): min=17290, max=55920, avg=32643.81, stdev=1785.29 00:32:12.281 clat 
percentiles (usec): 00:32:12.281 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[32113], 00:32:12.281 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:32:12.281 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:32:12.281 | 99.00th=[34866], 99.50th=[36439], 99.90th=[55837], 99.95th=[55837], 00:32:12.281 | 99.99th=[55837] 00:32:12.281 bw ( KiB/s): min= 1792, max= 2052, per=4.19%, avg=1947.32, stdev=69.21, samples=19 00:32:12.281 iops : min= 448, max= 513, avg=486.79, stdev=17.32, samples=19 00:32:12.281 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:32:12.281 cpu : usr=98.81%, sys=0.64%, ctx=19, majf=0, minf=29 00:32:12.281 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:12.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.281 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.281 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:12.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:12.281 00:32:12.281 Run status group 0 (all jobs): 00:32:12.281 READ: bw=45.4MiB/s (47.6MB/s), 1779KiB/s-2024KiB/s (1821kB/s-2073kB/s), io=455MiB (477MB), run=10001-10031msec 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.281 bdev_null0 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.281 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
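The NULL_DIF=1 pass being set up here uses the fio parameters traced just above: bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5 and files=1 (so only subsystems 0 and 1). gen_fio_conf turns these into the job file that fio reads from /dev/fd/61 further down; a hand-written equivalent is sketched below. The [global] options beyond the traced parameters (thread, direct, time_based) and the Nvme0n1/Nvme1n1 filename mapping are assumptions, the latter following SPDK's usual <name>n<nsid> bdev naming for bdev_nvme_attach_controller, and this is not a copy of gen_fio_conf's literal output.

    ; sketch of an equivalent fio job file (assumptions noted in the text above)
    ; ioengine is also passed on the fio command line in this log
    [global]
    ioengine=spdk_bdev
    thread=1
    direct=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

With two job sections and numjobs=2, fio reports "Starting 4 threads" below, and its echo of bs=(R) 8192B, (W) 16.0KiB, (T) 128KiB matches the read/write/trim split of bs=8k,16k,128k.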
00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.282 [2024-07-25 07:38:17.865880] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.282 bdev_null1 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:12.282 { 00:32:12.282 "params": { 00:32:12.282 "name": "Nvme$subsystem", 00:32:12.282 "trtype": "$TEST_TRANSPORT", 00:32:12.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:12.282 "adrfam": "ipv4", 00:32:12.282 "trsvcid": "$NVMF_PORT", 00:32:12.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:12.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:12.282 "hdgst": ${hdgst:-false}, 00:32:12.282 "ddgst": ${ddgst:-false} 00:32:12.282 }, 00:32:12.282 "method": "bdev_nvme_attach_controller" 00:32:12.282 } 00:32:12.282 EOF 00:32:12.282 )") 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:12.282 { 00:32:12.282 "params": { 00:32:12.282 "name": "Nvme$subsystem", 00:32:12.282 "trtype": "$TEST_TRANSPORT", 00:32:12.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:12.282 "adrfam": "ipv4", 00:32:12.282 "trsvcid": "$NVMF_PORT", 00:32:12.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:12.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:12.282 "hdgst": ${hdgst:-false}, 00:32:12.282 "ddgst": ${ddgst:-false} 00:32:12.282 }, 00:32:12.282 "method": "bdev_nvme_attach_controller" 00:32:12.282 } 00:32:12.282 EOF 00:32:12.282 )") 00:32:12.282 
07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:12.282 "params": { 00:32:12.282 "name": "Nvme0", 00:32:12.282 "trtype": "tcp", 00:32:12.282 "traddr": "10.0.0.2", 00:32:12.282 "adrfam": "ipv4", 00:32:12.282 "trsvcid": "4420", 00:32:12.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:12.282 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:12.282 "hdgst": false, 00:32:12.282 "ddgst": false 00:32:12.282 }, 00:32:12.282 "method": "bdev_nvme_attach_controller" 00:32:12.282 },{ 00:32:12.282 "params": { 00:32:12.282 "name": "Nvme1", 00:32:12.282 "trtype": "tcp", 00:32:12.282 "traddr": "10.0.0.2", 00:32:12.282 "adrfam": "ipv4", 00:32:12.282 "trsvcid": "4420", 00:32:12.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:12.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:12.282 "hdgst": false, 00:32:12.282 "ddgst": false 00:32:12.282 }, 00:32:12.282 "method": "bdev_nvme_attach_controller" 00:32:12.282 }' 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:12.282 07:38:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:12.282 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:12.282 ... 00:32:12.282 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:12.282 ... 
00:32:12.282 fio-3.35 00:32:12.282 Starting 4 threads 00:32:12.282 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.596 00:32:17.596 filename0: (groupid=0, jobs=1): err= 0: pid=310720: Thu Jul 25 07:38:24 2024 00:32:17.596 read: IOPS=2072, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5002msec) 00:32:17.596 slat (nsec): min=7819, max=34474, avg=8622.18, stdev=2190.41 00:32:17.596 clat (usec): min=1946, max=45017, avg=3836.24, stdev=1304.88 00:32:17.596 lat (usec): min=1955, max=45050, avg=3844.86, stdev=1305.03 00:32:17.596 clat percentiles (usec): 00:32:17.596 | 1.00th=[ 2474], 5.00th=[ 2868], 10.00th=[ 3032], 20.00th=[ 3261], 00:32:17.596 | 30.00th=[ 3490], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 3916], 00:32:17.596 | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 4883], 00:32:17.596 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 6587], 99.95th=[44827], 00:32:17.596 | 99.99th=[44827] 00:32:17.596 bw ( KiB/s): min=15041, max=17024, per=25.64%, avg=16542.33, stdev=585.67, samples=9 00:32:17.596 iops : min= 1880, max= 2128, avg=2067.78, stdev=73.25, samples=9 00:32:17.596 lat (msec) : 2=0.06%, 4=65.06%, 10=34.81%, 50=0.08% 00:32:17.596 cpu : usr=97.52%, sys=2.24%, ctx=3, majf=0, minf=32 00:32:17.596 IO depths : 1=0.2%, 2=1.1%, 4=69.0%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.596 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.596 issued rwts: total=10369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:17.596 filename0: (groupid=0, jobs=1): err= 0: pid=310721: Thu Jul 25 07:38:24 2024 00:32:17.596 read: IOPS=2077, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5002msec) 00:32:17.596 slat (nsec): min=7813, max=33305, avg=8556.50, stdev=2156.86 00:32:17.596 clat (usec): min=1881, max=6959, avg=3827.67, stdev=638.45 00:32:17.596 lat (usec): min=1889, max=6968, avg=3836.23, stdev=638.42 00:32:17.596 clat percentiles (usec): 00:32:17.596 | 1.00th=[ 2540], 5.00th=[ 2868], 10.00th=[ 3064], 20.00th=[ 3294], 00:32:17.596 | 30.00th=[ 3458], 40.00th=[ 3654], 50.00th=[ 3818], 60.00th=[ 3916], 00:32:17.596 | 70.00th=[ 4113], 80.00th=[ 4293], 90.00th=[ 4686], 95.00th=[ 4948], 00:32:17.596 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 6325], 99.95th=[ 6783], 00:32:17.596 | 99.99th=[ 6980] 00:32:17.596 bw ( KiB/s): min=16192, max=16832, per=25.77%, avg=16627.56, stdev=196.25, samples=9 00:32:17.596 iops : min= 2024, max= 2104, avg=2078.44, stdev=24.53, samples=9 00:32:17.596 lat (msec) : 2=0.06%, 4=64.53%, 10=35.41% 00:32:17.596 cpu : usr=97.36%, sys=2.38%, ctx=7, majf=0, minf=43 00:32:17.596 IO depths : 1=0.2%, 2=1.1%, 4=68.2%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.596 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.596 issued rwts: total=10394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:17.596 filename1: (groupid=0, jobs=1): err= 0: pid=310723: Thu Jul 25 07:38:24 2024 00:32:17.596 read: IOPS=2102, BW=16.4MiB/s (17.2MB/s)(82.2MiB/5003msec) 00:32:17.596 slat (nsec): min=5361, max=31689, avg=5917.51, stdev=1545.31 00:32:17.596 clat (usec): min=1438, max=6238, avg=3789.02, stdev=624.05 00:32:17.596 lat (usec): min=1444, max=6244, avg=3794.93, stdev=624.00 00:32:17.596 clat percentiles (usec): 00:32:17.596 | 1.00th=[ 2442], 5.00th=[ 
2802], 10.00th=[ 3032], 20.00th=[ 3261], 00:32:17.596 | 30.00th=[ 3458], 40.00th=[ 3621], 50.00th=[ 3785], 60.00th=[ 3916], 00:32:17.596 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 4883], 00:32:17.596 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 5997], 99.95th=[ 6128], 00:32:17.596 | 99.99th=[ 6194] 00:32:17.596 bw ( KiB/s): min=16336, max=17104, per=26.10%, avg=16842.67, stdev=292.63, samples=9 00:32:17.596 iops : min= 2042, max= 2138, avg=2105.33, stdev=36.58, samples=9 00:32:17.596 lat (msec) : 2=0.06%, 4=66.87%, 10=33.07% 00:32:17.596 cpu : usr=96.90%, sys=2.84%, ctx=7, majf=0, minf=19 00:32:17.596 IO depths : 1=0.1%, 2=0.9%, 4=69.2%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.596 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.596 issued rwts: total=10519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:17.596 filename1: (groupid=0, jobs=1): err= 0: pid=310724: Thu Jul 25 07:38:24 2024 00:32:17.596 read: IOPS=1812, BW=14.2MiB/s (14.8MB/s)(70.8MiB/5002msec) 00:32:17.596 slat (nsec): min=5358, max=28898, avg=6031.30, stdev=1789.75 00:32:17.596 clat (usec): min=1861, max=9059, avg=4397.84, stdev=821.85 00:32:17.596 lat (usec): min=1867, max=9064, avg=4403.87, stdev=821.80 00:32:17.596 clat percentiles (usec): 00:32:17.596 | 1.00th=[ 2835], 5.00th=[ 3195], 10.00th=[ 3425], 20.00th=[ 3752], 00:32:17.596 | 30.00th=[ 3916], 40.00th=[ 4113], 50.00th=[ 4293], 60.00th=[ 4490], 00:32:17.596 | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5473], 95.00th=[ 5866], 00:32:17.596 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 8586], 00:32:17.596 | 99.99th=[ 9110] 00:32:17.596 bw ( KiB/s): min=14032, max=14880, per=22.46%, avg=14490.67, stdev=264.00, samples=9 00:32:17.596 iops : min= 1754, max= 1860, avg=1811.33, stdev=33.00, samples=9 00:32:17.596 lat (msec) : 2=0.04%, 4=34.31%, 10=65.65% 00:32:17.596 cpu : usr=97.12%, sys=2.66%, ctx=10, majf=0, minf=75 00:32:17.596 IO depths : 1=0.3%, 2=1.9%, 4=68.9%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:17.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.596 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.596 issued rwts: total=9065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:17.596 00:32:17.596 Run status group 0 (all jobs): 00:32:17.596 READ: bw=63.0MiB/s (66.1MB/s), 14.2MiB/s-16.4MiB/s (14.8MB/s-17.2MB/s), io=315MiB (331MB), run=5002-5003msec 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.596 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.596 00:32:17.596 real 0m24.060s 00:32:17.596 user 5m14.301s 00:32:17.597 sys 0m4.423s 00:32:17.597 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:17.597 07:38:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.597 ************************************ 00:32:17.597 END TEST fio_dif_rand_params 00:32:17.597 ************************************ 00:32:17.597 07:38:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:17.597 07:38:24 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:17.597 07:38:24 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:17.597 07:38:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:17.597 ************************************ 00:32:17.597 START TEST fio_dif_digest 00:32:17.597 ************************************ 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:17.597 bdev_null0 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:17.597 [2024-07-25 07:38:24.275984] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:17.597 { 00:32:17.597 "params": { 00:32:17.597 "name": "Nvme$subsystem", 00:32:17.597 "trtype": "$TEST_TRANSPORT", 00:32:17.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.597 "adrfam": "ipv4", 00:32:17.597 "trsvcid": "$NVMF_PORT", 00:32:17.597 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.597 "hdgst": ${hdgst:-false}, 00:32:17.597 "ddgst": ${ddgst:-false} 00:32:17.597 }, 00:32:17.597 "method": "bdev_nvme_attach_controller" 00:32:17.597 } 00:32:17.597 EOF 00:32:17.597 )") 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:17.597 "params": { 00:32:17.597 "name": "Nvme0", 00:32:17.597 "trtype": "tcp", 00:32:17.597 "traddr": "10.0.0.2", 00:32:17.597 "adrfam": "ipv4", 00:32:17.597 "trsvcid": "4420", 00:32:17.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:17.597 "hdgst": true, 00:32:17.597 "ddgst": true 00:32:17.597 }, 00:32:17.597 "method": "bdev_nvme_attach_controller" 00:32:17.597 }' 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:17.597 07:38:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.597 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:17.597 ... 
00:32:17.597 fio-3.35 00:32:17.597 Starting 3 threads 00:32:17.597 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.835 00:32:29.835 filename0: (groupid=0, jobs=1): err= 0: pid=312065: Thu Jul 25 07:38:35 2024 00:32:29.836 read: IOPS=148, BW=18.5MiB/s (19.4MB/s)(186MiB/10048msec) 00:32:29.836 slat (nsec): min=5608, max=58953, avg=7672.00, stdev=2199.55 00:32:29.836 clat (usec): min=7453, max=99741, avg=20189.36, stdev=17163.18 00:32:29.836 lat (usec): min=7459, max=99749, avg=20197.04, stdev=17163.29 00:32:29.836 clat percentiles (msec): 00:32:29.836 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:32:29.836 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 14], 00:32:29.836 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 55], 95.00th=[ 56], 00:32:29.836 | 99.00th=[ 58], 99.50th=[ 95], 99.90th=[ 100], 99.95th=[ 101], 00:32:29.836 | 99.99th=[ 101] 00:32:29.836 bw ( KiB/s): min=10752, max=25600, per=36.86%, avg=19046.40, stdev=4541.20, samples=20 00:32:29.836 iops : min= 84, max= 200, avg=148.80, stdev=35.48, samples=20 00:32:29.836 lat (msec) : 10=13.22%, 20=68.72%, 50=0.07%, 100=17.99% 00:32:29.836 cpu : usr=96.83%, sys=2.88%, ctx=19, majf=0, minf=156 00:32:29.836 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:29.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.836 issued rwts: total=1490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:29.836 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:29.836 filename0: (groupid=0, jobs=1): err= 0: pid=312066: Thu Jul 25 07:38:35 2024 00:32:29.836 read: IOPS=133, BW=16.7MiB/s (17.5MB/s)(167MiB/10015msec) 00:32:29.836 slat (nsec): min=5579, max=31467, avg=7072.48, stdev=1370.24 00:32:29.836 clat (msec): min=7, max=135, avg=22.43, stdev=19.67 00:32:29.836 lat (msec): min=7, max=135, avg=22.43, stdev=19.67 00:32:29.836 clat percentiles (msec): 00:32:29.836 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:32:29.836 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:32:29.836 | 70.00th=[ 17], 80.00th=[ 52], 90.00th=[ 55], 95.00th=[ 57], 00:32:29.836 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 100], 99.95th=[ 136], 00:32:29.836 | 99.99th=[ 136] 00:32:29.836 bw ( KiB/s): min=10752, max=26112, per=33.10%, avg=17102.70, stdev=4577.83, samples=20 00:32:29.836 iops : min= 84, max= 204, avg=133.60, stdev=35.76, samples=20 00:32:29.836 lat (msec) : 10=11.80%, 20=67.21%, 50=0.07%, 100=20.84%, 250=0.07% 00:32:29.836 cpu : usr=97.13%, sys=2.62%, ctx=21, majf=0, minf=204 00:32:29.836 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:29.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.836 issued rwts: total=1339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:29.836 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:29.836 filename0: (groupid=0, jobs=1): err= 0: pid=312067: Thu Jul 25 07:38:35 2024 00:32:29.836 read: IOPS=122, BW=15.3MiB/s (16.0MB/s)(153MiB/10022msec) 00:32:29.836 slat (nsec): min=5762, max=60428, avg=9292.40, stdev=2094.11 00:32:29.836 clat (usec): min=7099, max=97996, avg=24486.55, stdev=20231.40 00:32:29.836 lat (usec): min=7108, max=98005, avg=24495.84, stdev=20231.38 00:32:29.836 clat percentiles (usec): 00:32:29.836 | 1.00th=[ 8029], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[11731], 
00:32:29.836 | 30.00th=[12649], 40.00th=[13435], 50.00th=[14353], 60.00th=[15270], 00:32:29.836 | 70.00th=[16712], 80.00th=[53216], 90.00th=[55313], 95.00th=[56886], 00:32:29.836 | 99.00th=[94897], 99.50th=[95945], 99.90th=[98042], 99.95th=[98042], 00:32:29.836 | 99.99th=[98042] 00:32:29.836 bw ( KiB/s): min= 9216, max=20992, per=30.32%, avg=15667.20, stdev=3651.31, samples=20 00:32:29.836 iops : min= 72, max= 164, avg=122.40, stdev=28.53, samples=20 00:32:29.836 lat (msec) : 10=7.99%, 20=66.50%, 50=0.08%, 100=25.43% 00:32:29.836 cpu : usr=96.62%, sys=3.09%, ctx=21, majf=0, minf=97 00:32:29.836 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:29.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:29.836 issued rwts: total=1227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:29.836 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:29.836 00:32:29.836 Run status group 0 (all jobs): 00:32:29.836 READ: bw=50.5MiB/s (52.9MB/s), 15.3MiB/s-18.5MiB/s (16.0MB/s-19.4MB/s), io=507MiB (532MB), run=10015-10048msec 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.836 00:32:29.836 real 0m11.090s 00:32:29.836 user 0m45.998s 00:32:29.836 sys 0m1.195s 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:29.836 07:38:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:29.836 ************************************ 00:32:29.836 END TEST fio_dif_digest 00:32:29.836 ************************************ 00:32:29.836 07:38:35 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:29.836 07:38:35 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:29.836 rmmod nvme_tcp 00:32:29.836 rmmod nvme_fabrics 00:32:29.836 rmmod nvme_keyring 00:32:29.836 07:38:35 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 301753 ']' 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 301753 00:32:29.836 07:38:35 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 301753 ']' 00:32:29.836 07:38:35 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 301753 00:32:29.836 07:38:35 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:32:29.836 07:38:35 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:29.836 07:38:35 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 301753 00:32:29.836 07:38:35 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:29.836 07:38:35 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:29.836 07:38:35 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 301753' 00:32:29.836 killing process with pid 301753 00:32:29.836 07:38:35 nvmf_dif -- common/autotest_common.sh@969 -- # kill 301753 00:32:29.836 07:38:35 nvmf_dif -- common/autotest_common.sh@974 -- # wait 301753 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:29.836 07:38:35 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:31.223 Waiting for block devices as requested 00:32:31.223 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:31.223 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:31.223 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:31.223 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:31.485 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:31.485 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:31.485 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:31.746 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:31.746 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:32.007 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:32.007 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:32.007 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:32.007 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:32.268 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:32.268 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:32.268 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:32.529 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:32.791 07:38:39 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:32.791 07:38:39 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:32.791 07:38:39 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:32.791 07:38:39 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:32.791 07:38:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.791 07:38:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:32.791 07:38:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.709 07:38:41 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:34.709 00:32:34.709 real 1m16.711s 00:32:34.709 user 8m6.211s 00:32:34.709 sys 0m19.300s 00:32:34.709 07:38:41 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:34.709 07:38:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:34.709 ************************************ 00:32:34.709 
END TEST nvmf_dif 00:32:34.709 ************************************ 00:32:34.709 07:38:42 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:34.709 07:38:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:34.709 07:38:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:34.709 07:38:42 -- common/autotest_common.sh@10 -- # set +x 00:32:34.971 ************************************ 00:32:34.971 START TEST nvmf_abort_qd_sizes 00:32:34.971 ************************************ 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:34.971 * Looking for test storage... 00:32:34.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.971 07:38:42 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:34.971 07:38:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:41.634 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:41.634 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.634 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:41.635 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:41.635 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:41.635 07:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:41.896 07:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.896 07:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.896 07:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:41.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:41.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:32:41.897 00:32:41.897 --- 10.0.0.2 ping statistics --- 00:32:41.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.897 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:32:41.897 07:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:41.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:41.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.479 ms 00:32:41.897 00:32:41.897 --- 10.0.0.1 ping statistics --- 00:32:41.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.897 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:32:41.897 07:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.897 07:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:41.897 07:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:41.897 07:38:49 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:45.199 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:45.199 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:45.460 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:45.460 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:45.721 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.721 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:45.721 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:45.721 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.721 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:45.721 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:45.721 07:38:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:45.721 07:38:52 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:45.721 07:38:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:45.721 07:38:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:45.721 07:38:53 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=321221 00:32:45.721 07:38:53 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 321221 00:32:45.721 07:38:53 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:45.721 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 321221 ']' 00:32:45.721 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.721 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:45.721 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:45.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.721 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:45.721 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:45.721 [2024-07-25 07:38:53.065059] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:32:45.721 [2024-07-25 07:38:53.065125] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.982 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.982 [2024-07-25 07:38:53.136674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:45.982 [2024-07-25 07:38:53.213480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.982 [2024-07-25 07:38:53.213523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.982 [2024-07-25 07:38:53.213531] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.982 [2024-07-25 07:38:53.213537] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.982 [2024-07-25 07:38:53.213543] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.982 [2024-07-25 07:38:53.213683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.982 [2024-07-25 07:38:53.213865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:45.982 [2024-07-25 07:38:53.214011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.982 [2024-07-25 07:38:53.214013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:32:46.555 07:38:53 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:46.555 07:38:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.816 ************************************ 00:32:46.816 START TEST spdk_target_abort 00:32:46.816 ************************************ 00:32:46.816 07:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:32:46.816 07:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:46.816 07:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:46.816 07:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.816 07:38:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:47.077 spdk_targetn1 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:47.078 [2024-07-25 07:38:54.237332] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:47.078 [2024-07-25 07:38:54.277637] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:47.078 07:38:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:47.078 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:47.339 [2024-07-25 07:38:54.450672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:512 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:47.339 [2024-07-25 07:38:54.450698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0041 p:1 m:0 dnr:0 00:32:47.339 [2024-07-25 07:38:54.552644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2544 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:32:47.339 [2024-07-25 07:38:54.552662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:50.643 Initializing NVMe Controllers 00:32:50.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:50.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:50.643 Initialization complete. Launching workers. 00:32:50.643 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 7810, failed: 2 00:32:50.643 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2847, failed to submit 4965 00:32:50.643 success 756, unsuccess 2091, failed 0 00:32:50.643 07:38:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:50.643 07:38:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:50.643 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.643 [2024-07-25 07:38:57.707362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:480 len:8 PRP1 0x200007c3c000 PRP2 0x0 00:32:50.643 [2024-07-25 07:38:57.707399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:32:50.643 [2024-07-25 07:38:57.723336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:776 len:8 PRP1 0x200007c44000 PRP2 0x0 00:32:50.643 [2024-07-25 07:38:57.723359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:32:53.946 [2024-07-25 07:39:00.595286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:67424 len:8 PRP1 0x200007c62000 PRP2 0x0 00:32:53.946 [2024-07-25 07:39:00.595335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:00f2 p:1 m:0 dnr:0 00:32:53.946 Initializing NVMe Controllers 00:32:53.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:53.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:53.946 Initialization complete. Launching workers. 
00:32:53.946 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8714, failed: 3 00:32:53.946 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 7501 00:32:53.946 success 347, unsuccess 869, failed 0 00:32:53.946 07:39:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:53.946 07:39:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:53.946 EAL: No free 2048 kB hugepages reported on node 1 00:32:55.332 [2024-07-25 07:39:02.353457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:163 nsid:1 lba:142296 len:8 PRP1 0x2000078f4000 PRP2 0x0 00:32:55.332 [2024-07-25 07:39:02.353486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:163 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:55.333 [2024-07-25 07:39:02.479303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:149 nsid:1 lba:155488 len:8 PRP1 0x2000078cc000 PRP2 0x0 00:32:55.333 [2024-07-25 07:39:02.479323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:149 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:32:55.333 [2024-07-25 07:39:02.586013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:152 nsid:1 lba:166960 len:8 PRP1 0x200007918000 PRP2 0x0 00:32:55.333 [2024-07-25 07:39:02.586030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:152 cdw0:0 sqhd:0022 p:1 m:0 dnr:0 00:32:56.717 Initializing NVMe Controllers 00:32:56.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:56.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:56.717 Initialization complete. Launching workers. 
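Note: the spdk_target_abort runs traced above reduce to a short RPC sequence followed by the abort example at increasing queue depths. The sketch below mirrors the calls visible in the trace (same NQN, PCI address, and listener); it is not the test script itself, and it assumes the target app is already up and listening on the default /var/tmp/spdk.sock shown earlier.

# Sketch only: commands taken from the xtrace output above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# Expose the local NVMe device through an NVMe-oF/TCP subsystem.
"$RPC" bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# Drive the abort example at each queue depth the test exercises.
for qd in 4 24 64; do
  "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done

# Tear down, as the trace does after the last run.
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
"$RPC" bdev_nvme_detach_controller spdk_target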
00:32:56.717 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40347, failed: 3 00:32:56.717 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2623, failed to submit 37727 00:32:56.717 success 650, unsuccess 1973, failed 0 00:32:56.717 07:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:56.717 07:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.717 07:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:56.717 07:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.717 07:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:56.717 07:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.717 07:39:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 321221 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 321221 ']' 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 321221 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321221 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321221' 00:32:58.627 killing process with pid 321221 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 321221 00:32:58.627 07:39:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 321221 00:32:58.889 00:32:58.889 real 0m12.183s 00:32:58.889 user 0m49.376s 00:32:58.889 sys 0m1.879s 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:58.889 ************************************ 00:32:58.889 END TEST spdk_target_abort 00:32:58.889 ************************************ 00:32:58.889 07:39:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:58.889 07:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:58.889 07:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:58.889 07:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:58.889 ************************************ 00:32:58.889 START TEST kernel_target_abort 00:32:58.889 
************************************ 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:58.889 07:39:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:02.197 Waiting for block devices as requested 00:33:02.197 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:02.459 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:02.459 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:02.459 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:02.720 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:02.720 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:02.720 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:02.982 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:02.982 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:03.243 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:03.243 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:03.243 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:03.505 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:03.505 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:03.505 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:03.505 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:03.799 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:04.072 No valid GPT data, bailing 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:04.072 07:39:11 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:04.072 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:33:04.072 00:33:04.072 Discovery Log Number of Records 2, Generation counter 2 00:33:04.073 =====Discovery Log Entry 0====== 00:33:04.073 trtype: tcp 00:33:04.073 adrfam: ipv4 00:33:04.073 subtype: current discovery subsystem 00:33:04.073 treq: not specified, sq flow control disable supported 00:33:04.073 portid: 1 00:33:04.073 trsvcid: 4420 00:33:04.073 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:04.073 traddr: 10.0.0.1 00:33:04.073 eflags: none 00:33:04.073 sectype: none 00:33:04.073 =====Discovery Log Entry 1====== 00:33:04.073 trtype: tcp 00:33:04.073 adrfam: ipv4 00:33:04.073 subtype: nvme subsystem 00:33:04.073 treq: not specified, sq flow control disable supported 00:33:04.073 portid: 1 00:33:04.073 trsvcid: 4420 00:33:04.073 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:04.073 traddr: 10.0.0.1 00:33:04.073 eflags: none 00:33:04.073 sectype: none 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.073 07:39:11 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:04.073 07:39:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:04.073 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.375 Initializing NVMe Controllers 00:33:07.375 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:07.375 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:07.375 Initialization complete. Launching workers. 00:33:07.375 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39625, failed: 0 00:33:07.375 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39625, failed to submit 0 00:33:07.375 success 0, unsuccess 39625, failed 0 00:33:07.375 07:39:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:07.375 07:39:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:07.375 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.675 Initializing NVMe Controllers 00:33:10.675 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:10.675 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:10.675 Initialization complete. Launching workers. 
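Note: unlike the SPDK target case, kernel_target_abort configures the in-kernel nvmet target directly through configfs. A rough sketch of the steps shown in the trace follows; the configfs attribute file names (device_path, addr_traddr, and so on) and the attr_model write are assumptions based on the standard nvmet layout, since the xtrace output shows only the echoed values, not their redirection targets.

# Sketch only: attribute names assumed, values taken from the trace.
SUBNQN=nqn.2016-06.io.spdk:testnqn
SUBSYS=/sys/kernel/config/nvmet/subsystems/$SUBNQN
PORT=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
modprobe nvmet_tcp
mkdir "$SUBSYS"
mkdir "$SUBSYS/namespaces/1"
mkdir "$PORT"

# Model string and host access; skip attr_model if the kernel lacks it.
[ -w "$SUBSYS/attr_model" ] && echo "SPDK-$SUBNQN" > "$SUBSYS/attr_model"
echo 1            > "$SUBSYS/attr_allow_any_host"
echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
echo 1            > "$SUBSYS/namespaces/1/enable"

echo 10.0.0.1 > "$PORT/addr_traddr"
echo tcp      > "$PORT/addr_trtype"
echo 4420     > "$PORT/addr_trsvcid"
echo ipv4     > "$PORT/addr_adrfam"

# Expose the subsystem on the port, then verify with a discovery, as the trace does.
ln -s "$SUBSYS" "$PORT/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420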
00:33:10.675 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79451, failed: 0 00:33:10.675 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20002, failed to submit 59449 00:33:10.675 success 0, unsuccess 20002, failed 0 00:33:10.675 07:39:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:10.675 07:39:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:10.675 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.977 Initializing NVMe Controllers 00:33:13.977 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:13.977 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:13.977 Initialization complete. Launching workers. 00:33:13.977 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76752, failed: 0 00:33:13.977 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19158, failed to submit 57594 00:33:13.977 success 0, unsuccess 19158, failed 0 00:33:13.977 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:13.978 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:13.978 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:13.978 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:13.978 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:13.978 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:13.978 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:13.978 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:13.978 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:13.978 07:39:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:17.279 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:33:17.279 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:17.279 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:18.665 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:19.236 00:33:19.237 real 0m20.130s 00:33:19.237 user 0m7.320s 00:33:19.237 sys 0m6.618s 00:33:19.237 07:39:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:19.237 07:39:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:19.237 ************************************ 00:33:19.237 END TEST kernel_target_abort 00:33:19.237 ************************************ 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:19.237 rmmod nvme_tcp 00:33:19.237 rmmod nvme_fabrics 00:33:19.237 rmmod nvme_keyring 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 321221 ']' 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 321221 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 321221 ']' 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 321221 00:33:19.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (321221) - No such process 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 321221 is not found' 00:33:19.237 Process with pid 321221 is not found 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:19.237 07:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:22.541 Waiting for block devices as requested 00:33:22.541 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:22.541 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:22.541 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:22.802 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:22.802 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:22.802 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:23.062 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:23.062 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:23.062 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:23.323 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:23.324 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:23.324 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:23.585 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:23.585 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:23.585 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:23.845 0000:00:01.0 (8086 
0b00): vfio-pci -> ioatdma 00:33:23.845 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:24.107 07:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:24.107 07:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:24.107 07:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:24.107 07:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:24.107 07:39:31 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.107 07:39:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:24.107 07:39:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.024 07:39:33 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:26.024 00:33:26.024 real 0m51.293s 00:33:26.024 user 1m1.657s 00:33:26.024 sys 0m19.108s 00:33:26.024 07:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:26.024 07:39:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:26.024 ************************************ 00:33:26.024 END TEST nvmf_abort_qd_sizes 00:33:26.024 ************************************ 00:33:26.286 07:39:33 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:26.286 07:39:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:26.286 07:39:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:26.286 07:39:33 -- common/autotest_common.sh@10 -- # set +x 00:33:26.286 ************************************ 00:33:26.286 START TEST keyring_file 00:33:26.286 ************************************ 00:33:26.286 07:39:33 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:26.286 * Looking for test storage... 
00:33:26.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:26.286 07:39:33 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:26.286 07:39:33 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.286 07:39:33 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.287 07:39:33 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.287 07:39:33 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.287 07:39:33 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.287 07:39:33 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.287 07:39:33 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.287 07:39:33 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.287 07:39:33 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:26.287 07:39:33 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:26.287 07:39:33 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:26.287 07:39:33 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:26.287 07:39:33 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:26.287 07:39:33 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:26.287 07:39:33 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:26.287 07:39:33 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wqTCnvTixw 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:26.287 07:39:33 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wqTCnvTixw 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wqTCnvTixw 00:33:26.287 07:39:33 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.wqTCnvTixw 00:33:26.287 07:39:33 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZjDoAbmfy0 00:33:26.287 07:39:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:26.287 07:39:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:26.548 07:39:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZjDoAbmfy0 00:33:26.548 07:39:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZjDoAbmfy0 00:33:26.548 07:39:33 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ZjDoAbmfy0 00:33:26.548 07:39:33 keyring_file -- keyring/file.sh@30 -- # tgtpid=332116 00:33:26.548 07:39:33 keyring_file -- keyring/file.sh@32 -- # waitforlisten 332116 00:33:26.548 07:39:33 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:26.548 07:39:33 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 332116 ']' 00:33:26.548 07:39:33 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.548 07:39:33 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:26.548 07:39:33 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.548 07:39:33 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:26.549 07:39:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:26.549 [2024-07-25 07:39:33.759150] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
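Note: the keyring_file preparation traced above boils down to writing an interchange-format PSK to a temp file, locking its permissions to 0600, and later registering it with the bdevperf instance over the private RPC socket (-r /var/tmp/bperf.sock) set up further down. A condensed sketch, assuming the NVMeTLSkey-1 key string has already been produced from the raw hex key (as format_interchange_psk does in the trace); the key value below is a placeholder, not a real key.

# Sketch only: key string is a placeholder produced elsewhere.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
KEY0_PATH=$(mktemp)                       # e.g. /tmp/tmp.wqTCnvTixw in the trace
echo 'NVMeTLSkey-1:00:<base64 blob from format_interchange_psk>:' > "$KEY0_PATH"
chmod 0600 "$KEY0_PATH"                   # the keyring_file module requires 0600

# Register the key under the name "key0" with the bdevperf RPC server.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$KEY0_PATH"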
00:33:26.549 [2024-07-25 07:39:33.759237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332116 ] 00:33:26.549 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.549 [2024-07-25 07:39:33.823664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.549 [2024-07-25 07:39:33.899394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:27.492 07:39:34 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:27.492 [2024-07-25 07:39:34.523059] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.492 null0 00:33:27.492 [2024-07-25 07:39:34.555109] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:27.492 [2024-07-25 07:39:34.555370] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:27.492 [2024-07-25 07:39:34.563114] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.492 07:39:34 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:27.492 [2024-07-25 07:39:34.579157] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:27.492 request: 00:33:27.492 { 00:33:27.492 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:27.492 "secure_channel": false, 00:33:27.492 "listen_address": { 00:33:27.492 "trtype": "tcp", 00:33:27.492 "traddr": "127.0.0.1", 00:33:27.492 "trsvcid": "4420" 00:33:27.492 }, 00:33:27.492 "method": "nvmf_subsystem_add_listener", 00:33:27.492 "req_id": 1 00:33:27.492 } 00:33:27.492 Got JSON-RPC error response 00:33:27.492 response: 00:33:27.492 { 00:33:27.492 "code": -32602, 00:33:27.492 "message": "Invalid parameters" 00:33:27.492 } 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 
00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:27.492 07:39:34 keyring_file -- keyring/file.sh@46 -- # bperfpid=332255 00:33:27.492 07:39:34 keyring_file -- keyring/file.sh@48 -- # waitforlisten 332255 /var/tmp/bperf.sock 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 332255 ']' 00:33:27.492 07:39:34 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:27.492 07:39:34 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:27.493 07:39:34 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:27.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:27.493 07:39:34 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:27.493 07:39:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:27.493 [2024-07-25 07:39:34.635878] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 00:33:27.493 [2024-07-25 07:39:34.635927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332255 ] 00:33:27.493 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.493 [2024-07-25 07:39:34.710823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.493 [2024-07-25 07:39:34.775294] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.065 07:39:35 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:28.065 07:39:35 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:28.065 07:39:35 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wqTCnvTixw 00:33:28.065 07:39:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wqTCnvTixw 00:33:28.327 07:39:35 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZjDoAbmfy0 00:33:28.327 07:39:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZjDoAbmfy0 00:33:28.587 07:39:35 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:28.587 07:39:35 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:28.587 07:39:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.587 07:39:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.587 07:39:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:28.587 07:39:35 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.wqTCnvTixw == \/\t\m\p\/\t\m\p\.\w\q\T\C\n\v\T\i\x\w ]] 00:33:28.587 07:39:35 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:33:28.587 07:39:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:28.587 07:39:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.587 07:39:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.587 07:39:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:28.848 07:39:36 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ZjDoAbmfy0 == \/\t\m\p\/\t\m\p\.\Z\j\D\o\A\b\m\f\y\0 ]] 00:33:28.848 07:39:36 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:28.848 07:39:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:28.848 07:39:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:28.848 07:39:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.848 07:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:28.848 07:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.848 07:39:36 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:28.848 07:39:36 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:28.848 07:39:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:28.848 07:39:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:28.848 07:39:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.848 07:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:28.848 07:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.152 07:39:36 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:29.152 07:39:36 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:29.152 07:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:29.152 [2024-07-25 07:39:36.488245] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:29.416 nvme0n1 00:33:29.416 07:39:36 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:29.416 07:39:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:29.416 07:39:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:29.416 07:39:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:29.416 07:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:29.416 07:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.416 07:39:36 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:29.416 07:39:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:29.416 07:39:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:29.416 07:39:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:29.416 07:39:36 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:33:29.416 07:39:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:29.416 07:39:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.678 07:39:36 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:29.678 07:39:36 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:29.678 Running I/O for 1 seconds... 00:33:31.063 00:33:31.063 Latency(us) 00:33:31.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.063 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:31.063 nvme0n1 : 1.02 5057.67 19.76 0.00 0.00 25041.77 6307.84 63351.47 00:33:31.063 =================================================================================================================== 00:33:31.063 Total : 5057.67 19.76 0.00 0.00 25041.77 6307.84 63351.47 00:33:31.063 0 00:33:31.063 07:39:38 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:31.063 07:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:31.063 07:39:38 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:31.063 07:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:31.063 07:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:31.063 07:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:31.063 07:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.063 07:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:31.063 07:39:38 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:31.063 07:39:38 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:31.063 07:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:31.063 07:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:31.063 07:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:31.063 07:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.063 07:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:31.324 07:39:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:31.324 07:39:38 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.324 07:39:38 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:31.324 07:39:38 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.324 07:39:38 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:31.324 07:39:38 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:31.324 07:39:38 keyring_file -- common/autotest_common.sh@642 -- # type -t 
bperf_cmd 00:33:31.324 07:39:38 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:31.324 07:39:38 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.324 07:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:31.324 [2024-07-25 07:39:38.687710] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:31.324 [2024-07-25 07:39:38.688346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61f220 (107): Transport endpoint is not connected 00:33:31.324 [2024-07-25 07:39:38.689342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61f220 (9): Bad file descriptor 00:33:31.324 [2024-07-25 07:39:38.690343] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:31.324 [2024-07-25 07:39:38.690350] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:31.324 [2024-07-25 07:39:38.690356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:31.585 request: 00:33:31.585 { 00:33:31.585 "name": "nvme0", 00:33:31.585 "trtype": "tcp", 00:33:31.585 "traddr": "127.0.0.1", 00:33:31.585 "adrfam": "ipv4", 00:33:31.585 "trsvcid": "4420", 00:33:31.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:31.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:31.585 "prchk_reftag": false, 00:33:31.585 "prchk_guard": false, 00:33:31.585 "hdgst": false, 00:33:31.585 "ddgst": false, 00:33:31.585 "psk": "key1", 00:33:31.585 "method": "bdev_nvme_attach_controller", 00:33:31.585 "req_id": 1 00:33:31.585 } 00:33:31.585 Got JSON-RPC error response 00:33:31.585 response: 00:33:31.585 { 00:33:31.585 "code": -5, 00:33:31.585 "message": "Input/output error" 00:33:31.585 } 00:33:31.585 07:39:38 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:31.585 07:39:38 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:31.585 07:39:38 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:31.585 07:39:38 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:31.585 07:39:38 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:31.585 07:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:31.585 07:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:31.585 07:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:31.585 07:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.585 07:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:31.585 07:39:38 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:31.585 07:39:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:31.585 07:39:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:31.585 07:39:38 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:33:31.585 07:39:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:31.585 07:39:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.585 07:39:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:31.846 07:39:39 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:31.846 07:39:39 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:31.846 07:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:31.846 07:39:39 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:31.846 07:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:32.107 07:39:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:32.107 07:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.107 07:39:39 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:32.368 07:39:39 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:32.368 07:39:39 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.wqTCnvTixw 00:33:32.368 07:39:39 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.wqTCnvTixw 00:33:32.368 07:39:39 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:32.368 07:39:39 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.wqTCnvTixw 00:33:32.368 07:39:39 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:32.368 07:39:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:32.368 07:39:39 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:32.368 07:39:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:32.368 07:39:39 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wqTCnvTixw 00:33:32.368 07:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wqTCnvTixw 00:33:32.368 [2024-07-25 07:39:39.648134] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wqTCnvTixw': 0100660 00:33:32.368 [2024-07-25 07:39:39.648153] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:32.368 request: 00:33:32.368 { 00:33:32.368 "name": "key0", 00:33:32.368 "path": "/tmp/tmp.wqTCnvTixw", 00:33:32.368 "method": "keyring_file_add_key", 00:33:32.368 "req_id": 1 00:33:32.368 } 00:33:32.368 Got JSON-RPC error response 00:33:32.368 response: 00:33:32.368 { 00:33:32.368 "code": -1, 00:33:32.368 "message": "Operation not permitted" 00:33:32.368 } 00:33:32.368 07:39:39 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:32.368 07:39:39 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:32.369 07:39:39 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:32.369 07:39:39 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
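
The refcount checks threaded through the trace above all go through the keyring/common.sh helpers: keyring_get_keys is issued against the bperf RPC socket and the entry for one key is filtered out with jq. Below is a stand-alone sketch of that pattern, reconstructed from the xtrace lines (the rpc.py path and socket are the ones used in this run; the helper bodies are an approximation of what the trace shows):

# Sketch of the get_key / get_refcnt helpers seen in the xtrace output above.
bperf_sock=/var/tmp/bperf.sock
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_key() {
    # Return the JSON object for one named key from the target's keyring.
    "$rpc" -s "$bperf_sock" keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}

get_refcnt() {
    # Extract the reference count; in this run it reads 1 while a key is only
    # registered and 2 while the controller created with --psk also holds it.
    get_key "$1" | jq -r .refcnt
}

get_refcnt key0   # prints e.g. 1
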
00:33:32.369 07:39:39 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.wqTCnvTixw 00:33:32.369 07:39:39 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wqTCnvTixw 00:33:32.369 07:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wqTCnvTixw 00:33:32.629 07:39:39 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.wqTCnvTixw 00:33:32.629 07:39:39 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:32.629 07:39:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:32.629 07:39:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:32.629 07:39:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.629 07:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.629 07:39:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:32.629 07:39:39 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:32.629 07:39:39 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:32.629 07:39:39 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:32.629 07:39:39 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:32.629 07:39:39 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:32.629 07:39:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:32.629 07:39:39 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:32.629 07:39:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:32.629 07:39:39 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:32.629 07:39:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:32.890 [2024-07-25 07:39:40.137395] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.wqTCnvTixw': No such file or directory 00:33:32.890 [2024-07-25 07:39:40.137427] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:32.890 [2024-07-25 07:39:40.137444] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:32.890 [2024-07-25 07:39:40.137449] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:32.890 [2024-07-25 07:39:40.137454] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:32.890 request: 00:33:32.890 { 00:33:32.890 "name": "nvme0", 00:33:32.890 "trtype": "tcp", 00:33:32.890 "traddr": "127.0.0.1", 00:33:32.890 "adrfam": "ipv4", 00:33:32.890 "trsvcid": "4420", 00:33:32.891 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:33:32.891 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:32.891 "prchk_reftag": false, 00:33:32.891 "prchk_guard": false, 00:33:32.891 "hdgst": false, 00:33:32.891 "ddgst": false, 00:33:32.891 "psk": "key0", 00:33:32.891 "method": "bdev_nvme_attach_controller", 00:33:32.891 "req_id": 1 00:33:32.891 } 00:33:32.891 Got JSON-RPC error response 00:33:32.891 response: 00:33:32.891 { 00:33:32.891 "code": -19, 00:33:32.891 "message": "No such device" 00:33:32.891 } 00:33:32.891 07:39:40 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:32.891 07:39:40 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:32.891 07:39:40 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:32.891 07:39:40 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:32.891 07:39:40 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:32.891 07:39:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:33.152 07:39:40 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:33.152 07:39:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:33.152 07:39:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:33.152 07:39:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:33.152 07:39:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:33.152 07:39:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:33.152 07:39:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lHKi06kGqD 00:33:33.152 07:39:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:33.152 07:39:40 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:33.152 07:39:40 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:33.152 07:39:40 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:33.152 07:39:40 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:33.152 07:39:40 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:33.152 07:39:40 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:33.152 07:39:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lHKi06kGqD 00:33:33.152 07:39:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lHKi06kGqD 00:33:33.152 07:39:40 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.lHKi06kGqD 00:33:33.152 07:39:40 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lHKi06kGqD 00:33:33.152 07:39:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lHKi06kGqD 00:33:33.413 07:39:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.413 07:39:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.413 nvme0n1 00:33:33.413 07:39:40 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:33:33.413 07:39:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:33.413 07:39:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.413 07:39:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.413 07:39:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.413 07:39:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.674 07:39:40 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:33.674 07:39:40 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:33.674 07:39:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:33.934 07:39:41 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:33.934 07:39:41 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:33.934 07:39:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.934 07:39:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.934 07:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.934 07:39:41 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:33.934 07:39:41 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:33.934 07:39:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:33.934 07:39:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.934 07:39:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.934 07:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.934 07:39:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:34.194 07:39:41 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:34.194 07:39:41 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:34.194 07:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:34.455 07:39:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:34.455 07:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:34.455 07:39:41 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:34.455 07:39:41 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:34.455 07:39:41 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lHKi06kGqD 00:33:34.455 07:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lHKi06kGqD 00:33:34.715 07:39:41 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZjDoAbmfy0 00:33:34.715 07:39:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZjDoAbmfy0 00:33:34.715 07:39:42 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.715 07:39:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.976 nvme0n1 00:33:34.976 07:39:42 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:34.976 07:39:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:35.237 07:39:42 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:35.237 "subsystems": [ 00:33:35.237 { 00:33:35.237 "subsystem": "keyring", 00:33:35.237 "config": [ 00:33:35.237 { 00:33:35.237 "method": "keyring_file_add_key", 00:33:35.237 "params": { 00:33:35.237 "name": "key0", 00:33:35.237 "path": "/tmp/tmp.lHKi06kGqD" 00:33:35.237 } 00:33:35.237 }, 00:33:35.237 { 00:33:35.237 "method": "keyring_file_add_key", 00:33:35.237 "params": { 00:33:35.237 "name": "key1", 00:33:35.237 "path": "/tmp/tmp.ZjDoAbmfy0" 00:33:35.237 } 00:33:35.237 } 00:33:35.237 ] 00:33:35.237 }, 00:33:35.237 { 00:33:35.237 "subsystem": "iobuf", 00:33:35.237 "config": [ 00:33:35.237 { 00:33:35.237 "method": "iobuf_set_options", 00:33:35.237 "params": { 00:33:35.237 "small_pool_count": 8192, 00:33:35.237 "large_pool_count": 1024, 00:33:35.237 "small_bufsize": 8192, 00:33:35.237 "large_bufsize": 135168 00:33:35.237 } 00:33:35.237 } 00:33:35.237 ] 00:33:35.237 }, 00:33:35.237 { 00:33:35.237 "subsystem": "sock", 00:33:35.237 "config": [ 00:33:35.237 { 00:33:35.237 "method": "sock_set_default_impl", 00:33:35.237 "params": { 00:33:35.237 "impl_name": "posix" 00:33:35.237 } 00:33:35.237 }, 00:33:35.237 { 00:33:35.237 "method": "sock_impl_set_options", 00:33:35.237 "params": { 00:33:35.237 "impl_name": "ssl", 00:33:35.237 "recv_buf_size": 4096, 00:33:35.237 "send_buf_size": 4096, 00:33:35.237 "enable_recv_pipe": true, 00:33:35.237 "enable_quickack": false, 00:33:35.237 "enable_placement_id": 0, 00:33:35.237 "enable_zerocopy_send_server": true, 00:33:35.237 "enable_zerocopy_send_client": false, 00:33:35.237 "zerocopy_threshold": 0, 00:33:35.237 "tls_version": 0, 00:33:35.237 "enable_ktls": false 00:33:35.237 } 00:33:35.237 }, 00:33:35.237 { 00:33:35.237 "method": "sock_impl_set_options", 00:33:35.237 "params": { 00:33:35.237 "impl_name": "posix", 00:33:35.237 "recv_buf_size": 2097152, 00:33:35.237 "send_buf_size": 2097152, 00:33:35.237 "enable_recv_pipe": true, 00:33:35.237 "enable_quickack": false, 00:33:35.237 "enable_placement_id": 0, 00:33:35.237 "enable_zerocopy_send_server": true, 00:33:35.237 "enable_zerocopy_send_client": false, 00:33:35.237 "zerocopy_threshold": 0, 00:33:35.237 "tls_version": 0, 00:33:35.237 "enable_ktls": false 00:33:35.237 } 00:33:35.237 } 00:33:35.237 ] 00:33:35.237 }, 00:33:35.237 { 00:33:35.237 "subsystem": "vmd", 00:33:35.237 "config": [] 00:33:35.237 }, 00:33:35.237 { 00:33:35.237 "subsystem": "accel", 00:33:35.237 "config": [ 00:33:35.237 { 00:33:35.237 "method": "accel_set_options", 00:33:35.237 "params": { 00:33:35.237 "small_cache_size": 128, 00:33:35.237 "large_cache_size": 16, 00:33:35.237 "task_count": 2048, 00:33:35.237 "sequence_count": 2048, 00:33:35.237 "buf_count": 2048 00:33:35.237 } 00:33:35.237 } 00:33:35.237 ] 00:33:35.237 }, 00:33:35.237 { 00:33:35.237 
"subsystem": "bdev", 00:33:35.237 "config": [ 00:33:35.237 { 00:33:35.237 "method": "bdev_set_options", 00:33:35.237 "params": { 00:33:35.237 "bdev_io_pool_size": 65535, 00:33:35.237 "bdev_io_cache_size": 256, 00:33:35.238 "bdev_auto_examine": true, 00:33:35.238 "iobuf_small_cache_size": 128, 00:33:35.238 "iobuf_large_cache_size": 16 00:33:35.238 } 00:33:35.238 }, 00:33:35.238 { 00:33:35.238 "method": "bdev_raid_set_options", 00:33:35.238 "params": { 00:33:35.238 "process_window_size_kb": 1024, 00:33:35.238 "process_max_bandwidth_mb_sec": 0 00:33:35.238 } 00:33:35.238 }, 00:33:35.238 { 00:33:35.238 "method": "bdev_iscsi_set_options", 00:33:35.238 "params": { 00:33:35.238 "timeout_sec": 30 00:33:35.238 } 00:33:35.238 }, 00:33:35.238 { 00:33:35.238 "method": "bdev_nvme_set_options", 00:33:35.238 "params": { 00:33:35.238 "action_on_timeout": "none", 00:33:35.238 "timeout_us": 0, 00:33:35.238 "timeout_admin_us": 0, 00:33:35.238 "keep_alive_timeout_ms": 10000, 00:33:35.238 "arbitration_burst": 0, 00:33:35.238 "low_priority_weight": 0, 00:33:35.238 "medium_priority_weight": 0, 00:33:35.238 "high_priority_weight": 0, 00:33:35.238 "nvme_adminq_poll_period_us": 10000, 00:33:35.238 "nvme_ioq_poll_period_us": 0, 00:33:35.238 "io_queue_requests": 512, 00:33:35.238 "delay_cmd_submit": true, 00:33:35.238 "transport_retry_count": 4, 00:33:35.238 "bdev_retry_count": 3, 00:33:35.238 "transport_ack_timeout": 0, 00:33:35.238 "ctrlr_loss_timeout_sec": 0, 00:33:35.238 "reconnect_delay_sec": 0, 00:33:35.238 "fast_io_fail_timeout_sec": 0, 00:33:35.238 "disable_auto_failback": false, 00:33:35.238 "generate_uuids": false, 00:33:35.238 "transport_tos": 0, 00:33:35.238 "nvme_error_stat": false, 00:33:35.238 "rdma_srq_size": 0, 00:33:35.238 "io_path_stat": false, 00:33:35.238 "allow_accel_sequence": false, 00:33:35.238 "rdma_max_cq_size": 0, 00:33:35.238 "rdma_cm_event_timeout_ms": 0, 00:33:35.238 "dhchap_digests": [ 00:33:35.238 "sha256", 00:33:35.238 "sha384", 00:33:35.238 "sha512" 00:33:35.238 ], 00:33:35.238 "dhchap_dhgroups": [ 00:33:35.238 "null", 00:33:35.238 "ffdhe2048", 00:33:35.238 "ffdhe3072", 00:33:35.238 "ffdhe4096", 00:33:35.238 "ffdhe6144", 00:33:35.238 "ffdhe8192" 00:33:35.238 ] 00:33:35.238 } 00:33:35.238 }, 00:33:35.238 { 00:33:35.238 "method": "bdev_nvme_attach_controller", 00:33:35.238 "params": { 00:33:35.238 "name": "nvme0", 00:33:35.238 "trtype": "TCP", 00:33:35.238 "adrfam": "IPv4", 00:33:35.238 "traddr": "127.0.0.1", 00:33:35.238 "trsvcid": "4420", 00:33:35.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:35.238 "prchk_reftag": false, 00:33:35.238 "prchk_guard": false, 00:33:35.238 "ctrlr_loss_timeout_sec": 0, 00:33:35.238 "reconnect_delay_sec": 0, 00:33:35.238 "fast_io_fail_timeout_sec": 0, 00:33:35.238 "psk": "key0", 00:33:35.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:35.238 "hdgst": false, 00:33:35.238 "ddgst": false 00:33:35.238 } 00:33:35.238 }, 00:33:35.238 { 00:33:35.238 "method": "bdev_nvme_set_hotplug", 00:33:35.238 "params": { 00:33:35.238 "period_us": 100000, 00:33:35.238 "enable": false 00:33:35.238 } 00:33:35.238 }, 00:33:35.238 { 00:33:35.238 "method": "bdev_wait_for_examine" 00:33:35.238 } 00:33:35.238 ] 00:33:35.238 }, 00:33:35.238 { 00:33:35.238 "subsystem": "nbd", 00:33:35.238 "config": [] 00:33:35.238 } 00:33:35.238 ] 00:33:35.238 }' 00:33:35.238 07:39:42 keyring_file -- keyring/file.sh@114 -- # killprocess 332255 00:33:35.238 07:39:42 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 332255 ']' 00:33:35.238 07:39:42 keyring_file -- 
common/autotest_common.sh@954 -- # kill -0 332255 00:33:35.238 07:39:42 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:35.238 07:39:42 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:35.238 07:39:42 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 332255 00:33:35.238 07:39:42 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:35.238 07:39:42 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:35.238 07:39:42 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 332255' 00:33:35.238 killing process with pid 332255 00:33:35.238 07:39:42 keyring_file -- common/autotest_common.sh@969 -- # kill 332255 00:33:35.238 Received shutdown signal, test time was about 1.000000 seconds 00:33:35.238 00:33:35.238 Latency(us) 00:33:35.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.238 =================================================================================================================== 00:33:35.238 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:35.238 07:39:42 keyring_file -- common/autotest_common.sh@974 -- # wait 332255 00:33:35.499 07:39:42 keyring_file -- keyring/file.sh@117 -- # bperfpid=333936 00:33:35.499 07:39:42 keyring_file -- keyring/file.sh@119 -- # waitforlisten 333936 /var/tmp/bperf.sock 00:33:35.499 07:39:42 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 333936 ']' 00:33:35.499 07:39:42 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:35.499 07:39:42 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:35.499 07:39:42 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:35.499 07:39:42 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:35.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
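
Two negative cases exercised earlier in the trace are worth spelling out: keyring_file_add_key rejects a key file whose mode allows group or other access (the chmod 0660 attempt fails with "Invalid permissions for key file ... 0100660"), and bdev_nvme_attach_controller returns "No such device" once the registered file has been deleted. A condensed sketch of the permission check, using the same RPC calls as the trace (the key string is the sample value from this run; the file path is a placeholder created here):

# Reproduce the 0600 permission requirement shown above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key_file=$(mktemp)
echo "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_file"

chmod 0660 "$key_file"
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_file" \
    && echo "unexpected success" \
    || echo "rejected: key file must not be group/world accessible"

chmod 0600 "$key_file"
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_file"   # accepted in this run
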
00:33:35.499 07:39:42 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:35.499 07:39:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:35.499 07:39:42 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:35.499 "subsystems": [ 00:33:35.499 { 00:33:35.499 "subsystem": "keyring", 00:33:35.499 "config": [ 00:33:35.499 { 00:33:35.499 "method": "keyring_file_add_key", 00:33:35.499 "params": { 00:33:35.499 "name": "key0", 00:33:35.499 "path": "/tmp/tmp.lHKi06kGqD" 00:33:35.499 } 00:33:35.499 }, 00:33:35.499 { 00:33:35.499 "method": "keyring_file_add_key", 00:33:35.499 "params": { 00:33:35.499 "name": "key1", 00:33:35.499 "path": "/tmp/tmp.ZjDoAbmfy0" 00:33:35.499 } 00:33:35.499 } 00:33:35.499 ] 00:33:35.499 }, 00:33:35.499 { 00:33:35.499 "subsystem": "iobuf", 00:33:35.499 "config": [ 00:33:35.499 { 00:33:35.499 "method": "iobuf_set_options", 00:33:35.499 "params": { 00:33:35.499 "small_pool_count": 8192, 00:33:35.499 "large_pool_count": 1024, 00:33:35.499 "small_bufsize": 8192, 00:33:35.499 "large_bufsize": 135168 00:33:35.499 } 00:33:35.499 } 00:33:35.499 ] 00:33:35.499 }, 00:33:35.499 { 00:33:35.499 "subsystem": "sock", 00:33:35.499 "config": [ 00:33:35.499 { 00:33:35.499 "method": "sock_set_default_impl", 00:33:35.499 "params": { 00:33:35.499 "impl_name": "posix" 00:33:35.499 } 00:33:35.499 }, 00:33:35.499 { 00:33:35.499 "method": "sock_impl_set_options", 00:33:35.499 "params": { 00:33:35.499 "impl_name": "ssl", 00:33:35.499 "recv_buf_size": 4096, 00:33:35.499 "send_buf_size": 4096, 00:33:35.499 "enable_recv_pipe": true, 00:33:35.499 "enable_quickack": false, 00:33:35.499 "enable_placement_id": 0, 00:33:35.499 "enable_zerocopy_send_server": true, 00:33:35.499 "enable_zerocopy_send_client": false, 00:33:35.499 "zerocopy_threshold": 0, 00:33:35.499 "tls_version": 0, 00:33:35.499 "enable_ktls": false 00:33:35.499 } 00:33:35.499 }, 00:33:35.499 { 00:33:35.499 "method": "sock_impl_set_options", 00:33:35.499 "params": { 00:33:35.499 "impl_name": "posix", 00:33:35.500 "recv_buf_size": 2097152, 00:33:35.500 "send_buf_size": 2097152, 00:33:35.500 "enable_recv_pipe": true, 00:33:35.500 "enable_quickack": false, 00:33:35.500 "enable_placement_id": 0, 00:33:35.500 "enable_zerocopy_send_server": true, 00:33:35.500 "enable_zerocopy_send_client": false, 00:33:35.500 "zerocopy_threshold": 0, 00:33:35.500 "tls_version": 0, 00:33:35.500 "enable_ktls": false 00:33:35.500 } 00:33:35.500 } 00:33:35.500 ] 00:33:35.500 }, 00:33:35.500 { 00:33:35.500 "subsystem": "vmd", 00:33:35.500 "config": [] 00:33:35.500 }, 00:33:35.500 { 00:33:35.500 "subsystem": "accel", 00:33:35.500 "config": [ 00:33:35.500 { 00:33:35.500 "method": "accel_set_options", 00:33:35.500 "params": { 00:33:35.500 "small_cache_size": 128, 00:33:35.500 "large_cache_size": 16, 00:33:35.500 "task_count": 2048, 00:33:35.500 "sequence_count": 2048, 00:33:35.500 "buf_count": 2048 00:33:35.500 } 00:33:35.500 } 00:33:35.500 ] 00:33:35.500 }, 00:33:35.500 { 00:33:35.500 "subsystem": "bdev", 00:33:35.500 "config": [ 00:33:35.500 { 00:33:35.500 "method": "bdev_set_options", 00:33:35.500 "params": { 00:33:35.500 "bdev_io_pool_size": 65535, 00:33:35.500 "bdev_io_cache_size": 256, 00:33:35.500 "bdev_auto_examine": true, 00:33:35.500 "iobuf_small_cache_size": 128, 00:33:35.500 "iobuf_large_cache_size": 16 00:33:35.500 } 00:33:35.500 }, 00:33:35.500 { 00:33:35.500 "method": "bdev_raid_set_options", 00:33:35.500 "params": { 00:33:35.500 "process_window_size_kb": 1024, 00:33:35.500 "process_max_bandwidth_mb_sec": 0 00:33:35.500 
} 00:33:35.500 }, 00:33:35.500 { 00:33:35.500 "method": "bdev_iscsi_set_options", 00:33:35.500 "params": { 00:33:35.500 "timeout_sec": 30 00:33:35.500 } 00:33:35.500 }, 00:33:35.500 { 00:33:35.500 "method": "bdev_nvme_set_options", 00:33:35.500 "params": { 00:33:35.500 "action_on_timeout": "none", 00:33:35.500 "timeout_us": 0, 00:33:35.500 "timeout_admin_us": 0, 00:33:35.500 "keep_alive_timeout_ms": 10000, 00:33:35.500 "arbitration_burst": 0, 00:33:35.500 "low_priority_weight": 0, 00:33:35.500 "medium_priority_weight": 0, 00:33:35.500 "high_priority_weight": 0, 00:33:35.500 "nvme_adminq_poll_period_us": 10000, 00:33:35.500 "nvme_ioq_poll_period_us": 0, 00:33:35.500 "io_queue_requests": 512, 00:33:35.500 "delay_cmd_submit": true, 00:33:35.500 "transport_retry_count": 4, 00:33:35.500 "bdev_retry_count": 3, 00:33:35.500 "transport_ack_timeout": 0, 00:33:35.500 "ctrlr_loss_timeout_sec": 0, 00:33:35.500 "reconnect_delay_sec": 0, 00:33:35.500 "fast_io_fail_timeout_sec": 0, 00:33:35.500 "disable_auto_failback": false, 00:33:35.500 "generate_uuids": false, 00:33:35.500 "transport_tos": 0, 00:33:35.500 "nvme_error_stat": false, 00:33:35.500 "rdma_srq_size": 0, 00:33:35.500 "io_path_stat": false, 00:33:35.500 "allow_accel_sequence": false, 00:33:35.500 "rdma_max_cq_size": 0, 00:33:35.500 "rdma_cm_event_timeout_ms": 0, 00:33:35.500 "dhchap_digests": [ 00:33:35.500 "sha256", 00:33:35.500 "sha384", 00:33:35.500 "sha512" 00:33:35.500 ], 00:33:35.500 "dhchap_dhgroups": [ 00:33:35.500 "null", 00:33:35.500 "ffdhe2048", 00:33:35.500 "ffdhe3072", 00:33:35.500 "ffdhe4096", 00:33:35.500 "ffdhe6144", 00:33:35.500 "ffdhe8192" 00:33:35.500 ] 00:33:35.500 } 00:33:35.500 }, 00:33:35.500 { 00:33:35.500 "method": "bdev_nvme_attach_controller", 00:33:35.500 "params": { 00:33:35.500 "name": "nvme0", 00:33:35.500 "trtype": "TCP", 00:33:35.500 "adrfam": "IPv4", 00:33:35.500 "traddr": "127.0.0.1", 00:33:35.500 "trsvcid": "4420", 00:33:35.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:35.500 "prchk_reftag": false, 00:33:35.500 "prchk_guard": false, 00:33:35.500 "ctrlr_loss_timeout_sec": 0, 00:33:35.500 "reconnect_delay_sec": 0, 00:33:35.500 "fast_io_fail_timeout_sec": 0, 00:33:35.500 "psk": "key0", 00:33:35.500 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:35.500 "hdgst": false, 00:33:35.500 "ddgst": false 00:33:35.500 } 00:33:35.500 }, 00:33:35.500 { 00:33:35.500 "method": "bdev_nvme_set_hotplug", 00:33:35.500 "params": { 00:33:35.500 "period_us": 100000, 00:33:35.500 "enable": false 00:33:35.500 } 00:33:35.500 }, 00:33:35.500 { 00:33:35.500 "method": "bdev_wait_for_examine" 00:33:35.500 } 00:33:35.500 ] 00:33:35.500 }, 00:33:35.500 { 00:33:35.500 "subsystem": "nbd", 00:33:35.500 "config": [] 00:33:35.500 } 00:33:35.500 ] 00:33:35.500 }' 00:33:35.500 [2024-07-25 07:39:42.737891] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
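
The JSON blob printed above is the output of the save_config RPC on the first bdevperf instance; the test then starts a second bdevperf with -z -c /dev/fd/63 and echoes the same JSON into it, so the new process re-creates both file-based keys and the PSK-protected controller purely from configuration. A minimal sketch of that round trip, assuming bash process substitution is what produces the /dev/fd/63 descriptor visible in the command line:

# Capture the running configuration of one instance...
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
config=$("$rpc" -s /var/tmp/bperf.sock save_config)

# ...and start a fresh bdevperf that replays it at boot. -z keeps the app
# waiting for perform_tests; <(...) is assumed to back the /dev/fd/63 seen above.
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")
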
00:33:35.500 [2024-07-25 07:39:42.737949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333936 ] 00:33:35.500 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.500 [2024-07-25 07:39:42.812371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.500 [2024-07-25 07:39:42.865565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.761 [2024-07-25 07:39:43.007496] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:36.330 07:39:43 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:36.330 07:39:43 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:36.330 07:39:43 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:36.330 07:39:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.330 07:39:43 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:36.330 07:39:43 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:36.330 07:39:43 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:36.330 07:39:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:36.330 07:39:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.330 07:39:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.330 07:39:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.330 07:39:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:36.590 07:39:43 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:36.590 07:39:43 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:36.590 07:39:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:36.590 07:39:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.590 07:39:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.590 07:39:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.590 07:39:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:36.850 07:39:43 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:36.850 07:39:43 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:36.850 07:39:43 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:36.850 07:39:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:36.850 07:39:44 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:36.850 07:39:44 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:36.850 07:39:44 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.lHKi06kGqD /tmp/tmp.ZjDoAbmfy0 00:33:36.850 07:39:44 keyring_file -- keyring/file.sh@20 -- # killprocess 333936 00:33:36.850 07:39:44 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 333936 ']' 00:33:36.850 07:39:44 keyring_file -- common/autotest_common.sh@954 -- # kill -0 333936 00:33:36.850 07:39:44 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:33:36.850 07:39:44 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:36.850 07:39:44 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 333936 00:33:36.850 07:39:44 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:36.850 07:39:44 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:36.850 07:39:44 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 333936' 00:33:36.850 killing process with pid 333936 00:33:36.850 07:39:44 keyring_file -- common/autotest_common.sh@969 -- # kill 333936 00:33:36.850 Received shutdown signal, test time was about 1.000000 seconds 00:33:36.850 00:33:36.850 Latency(us) 00:33:36.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.850 =================================================================================================================== 00:33:36.850 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:36.850 07:39:44 keyring_file -- common/autotest_common.sh@974 -- # wait 333936 00:33:37.110 07:39:44 keyring_file -- keyring/file.sh@21 -- # killprocess 332116 00:33:37.110 07:39:44 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 332116 ']' 00:33:37.110 07:39:44 keyring_file -- common/autotest_common.sh@954 -- # kill -0 332116 00:33:37.110 07:39:44 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:37.110 07:39:44 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:37.110 07:39:44 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 332116 00:33:37.110 07:39:44 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:37.110 07:39:44 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:37.110 07:39:44 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 332116' 00:33:37.110 killing process with pid 332116 00:33:37.110 07:39:44 keyring_file -- common/autotest_common.sh@969 -- # kill 332116 00:33:37.110 [2024-07-25 07:39:44.332675] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:37.110 07:39:44 keyring_file -- common/autotest_common.sh@974 -- # wait 332116 00:33:37.371 00:33:37.371 real 0m11.105s 00:33:37.371 user 0m26.074s 00:33:37.371 sys 0m2.549s 00:33:37.371 07:39:44 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:37.371 07:39:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:37.371 ************************************ 00:33:37.371 END TEST keyring_file 00:33:37.371 ************************************ 00:33:37.371 07:39:44 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:33:37.371 07:39:44 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:37.371 07:39:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:37.371 07:39:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:37.371 07:39:44 -- common/autotest_common.sh@10 -- # set +x 00:33:37.371 ************************************ 00:33:37.371 START TEST keyring_linux 00:33:37.371 ************************************ 00:33:37.371 07:39:44 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:37.371 * Looking for test storage... 
00:33:37.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:37.371 07:39:44 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:37.371 07:39:44 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.371 07:39:44 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:37.371 07:39:44 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.371 07:39:44 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.371 07:39:44 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.371 07:39:44 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.371 07:39:44 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.371 07:39:44 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.371 07:39:44 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.371 07:39:44 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.371 07:39:44 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.371 07:39:44 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.632 07:39:44 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.632 07:39:44 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.632 07:39:44 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.632 07:39:44 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.632 07:39:44 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.632 07:39:44 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.632 07:39:44 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:37.632 07:39:44 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:37.632 07:39:44 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:37.632 07:39:44 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:37.632 07:39:44 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:37.632 07:39:44 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:37.632 07:39:44 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:37.632 07:39:44 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:37.632 07:39:44 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:37.632 07:39:44 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:37.633 07:39:44 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:37.633 /tmp/:spdk-test:key0 00:33:37.633 07:39:44 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:37.633 07:39:44 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:37.633 07:39:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:37.633 /tmp/:spdk-test:key1 00:33:37.633 07:39:44 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:37.633 07:39:44 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=334489 00:33:37.633 07:39:44 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 334489 00:33:37.633 07:39:44 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 334489 ']' 00:33:37.633 07:39:44 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.633 07:39:44 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:37.633 07:39:44 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.633 07:39:44 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:37.633 07:39:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:37.633 [2024-07-25 07:39:44.890047] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
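
prep_key above writes each PSK to a file in the NVMe TLS interchange form ("NVMeTLSkey-1:00:...:"). The python body behind format_interchange_psk is not visible in the trace, so the following is only a sketch of such a formatter, assuming the middle field encodes the digest (0 = key used as configured, no hash) and that the base64 payload is the key bytes with a little-endian CRC-32 appended:

# Hedged reconstruction of format_interchange_psk (the real helper lives in
# nvmf/common.sh; only its invocation appears in the trace).
format_interchange_psk() {
    local key=$1 digest=$2     # digest 0 == configured key, no hash
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: little-endian CRC-32 suffix
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 0
# The trace registers: NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
# (this sketch reproduces that string only if the CRC-32 assumption holds)
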
00:33:37.633 [2024-07-25 07:39:44.890120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334489 ] 00:33:37.633 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.633 [2024-07-25 07:39:44.956628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.893 [2024-07-25 07:39:45.030962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.465 07:39:45 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:38.465 07:39:45 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:38.465 07:39:45 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:38.465 07:39:45 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.465 07:39:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:38.465 [2024-07-25 07:39:45.672024] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:38.465 null0 00:33:38.465 [2024-07-25 07:39:45.704076] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:38.465 [2024-07-25 07:39:45.704468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:38.465 07:39:45 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.465 07:39:45 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:38.465 865050283 00:33:38.465 07:39:45 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:38.465 815421488 00:33:38.465 07:39:45 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=334505 00:33:38.465 07:39:45 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 334505 /var/tmp/bperf.sock 00:33:38.465 07:39:45 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:38.465 07:39:45 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 334505 ']' 00:33:38.465 07:39:45 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:38.465 07:39:45 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:38.465 07:39:45 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:38.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:38.465 07:39:45 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:38.465 07:39:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:38.465 [2024-07-25 07:39:45.779078] Starting SPDK v24.09-pre git sha1 223450b47 / DPDK 24.03.0 initialization... 
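
Unlike keyring_file, the keyring_linux flow stores the PSK in the kernel session keyring with keyctl and hands SPDK only the key name. The bdevperf instance is therefore started with --wait-for-rpc so the linux keyring backend can be enabled before framework initialization, which is what the RPCs that follow in the trace do; the teardown at the end searches the session keyring by name and unlinks the key. A condensed sketch of the sequence (names, addresses and the key string are the ones used in this run):

# Put the interchange-format PSK into the kernel session keyring; keyctl
# prints the key's serial number (865050283 in this run).
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

# On the bdevperf side (started with --wait-for-rpc), enable the linux keyring
# backend, finish init, then attach the TLS-protected controller by key name.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bperf.sock keyring_linux_set_options --enable
"$rpc" -s /var/tmp/bperf.sock framework_start_init
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# Cleanup mirrors the end of the trace: look the key up by name and unlink it.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl unlink "$sn"
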
00:33:38.465 [2024-07-25 07:39:45.779126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334505 ] 00:33:38.465 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.726 [2024-07-25 07:39:45.854871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.726 [2024-07-25 07:39:45.908934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.297 07:39:46 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:39.297 07:39:46 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:39.297 07:39:46 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:39.297 07:39:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:39.558 07:39:46 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:39.558 07:39:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:39.558 07:39:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:39.558 07:39:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:39.818 [2024-07-25 07:39:47.032106] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:39.818 nvme0n1 00:33:39.818 07:39:47 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:39.818 07:39:47 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:39.818 07:39:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:39.818 07:39:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:39.818 07:39:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:39.818 07:39:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.078 07:39:47 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:40.078 07:39:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:40.078 07:39:47 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:40.078 07:39:47 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:40.078 07:39:47 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.078 07:39:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.078 07:39:47 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:40.339 07:39:47 keyring_linux -- keyring/linux.sh@25 -- # sn=865050283 00:33:40.339 07:39:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:40.339 07:39:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:40.339 07:39:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 865050283 == \8\6\5\0\5\0\2\8\3 ]] 00:33:40.339 07:39:47 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 865050283 00:33:40.339 07:39:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:40.339 07:39:47 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:40.339 Running I/O for 1 seconds... 00:33:41.281 00:33:41.281 Latency(us) 00:33:41.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.281 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:41.281 nvme0n1 : 1.01 7044.86 27.52 0.00 0.00 18041.18 5625.17 24248.32 00:33:41.281 =================================================================================================================== 00:33:41.281 Total : 7044.86 27.52 0.00 0.00 18041.18 5625.17 24248.32 00:33:41.281 0 00:33:41.281 07:39:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:41.281 07:39:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:41.542 07:39:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:41.542 07:39:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:41.542 07:39:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:41.542 07:39:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:41.542 07:39:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:41.542 07:39:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.542 07:39:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:41.542 07:39:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:41.542 07:39:48 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:41.542 07:39:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:41.542 07:39:48 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:33:41.542 07:39:48 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:41.542 07:39:48 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:41.542 07:39:48 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:41.542 07:39:48 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:41.542 07:39:48 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:41.542 07:39:48 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:41.542 07:39:48 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:41.802 [2024-07-25 07:39:49.033812] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:41.802 [2024-07-25 07:39:49.033962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc13170 (107): Transport endpoint is not connected 00:33:41.802 [2024-07-25 07:39:49.034957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc13170 (9): Bad file descriptor 00:33:41.802 [2024-07-25 07:39:49.035958] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:41.802 [2024-07-25 07:39:49.035964] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:41.802 [2024-07-25 07:39:49.035969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:41.802 request: 00:33:41.802 { 00:33:41.802 "name": "nvme0", 00:33:41.802 "trtype": "tcp", 00:33:41.802 "traddr": "127.0.0.1", 00:33:41.802 "adrfam": "ipv4", 00:33:41.802 "trsvcid": "4420", 00:33:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:41.802 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:41.802 "prchk_reftag": false, 00:33:41.802 "prchk_guard": false, 00:33:41.802 "hdgst": false, 00:33:41.802 "ddgst": false, 00:33:41.802 "psk": ":spdk-test:key1", 00:33:41.802 "method": "bdev_nvme_attach_controller", 00:33:41.802 "req_id": 1 00:33:41.802 } 00:33:41.802 Got JSON-RPC error response 00:33:41.802 response: 00:33:41.802 { 00:33:41.802 "code": -5, 00:33:41.802 "message": "Input/output error" 00:33:41.802 } 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@33 -- # sn=865050283 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 865050283 00:33:41.802 1 links removed 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@33 -- # sn=815421488 00:33:41.802 07:39:49 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 815421488 00:33:41.802 1 links removed 00:33:41.802 07:39:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 334505 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 334505 ']' 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 334505 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 334505 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:41.802 07:39:49 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 334505' 00:33:41.802 killing process with pid 334505 00:33:41.803 07:39:49 keyring_linux -- common/autotest_common.sh@969 -- # kill 334505 00:33:41.803 Received shutdown signal, test time was about 1.000000 seconds 00:33:41.803 00:33:41.803 Latency(us) 00:33:41.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.803 =================================================================================================================== 00:33:41.803 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:41.803 07:39:49 keyring_linux -- common/autotest_common.sh@974 -- # wait 334505 00:33:42.063 07:39:49 keyring_linux -- keyring/linux.sh@42 -- # killprocess 334489 00:33:42.063 07:39:49 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 334489 ']' 00:33:42.063 07:39:49 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 334489 00:33:42.063 07:39:49 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:42.063 07:39:49 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:42.063 07:39:49 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 334489 00:33:42.063 07:39:49 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:42.063 07:39:49 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:42.063 07:39:49 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 334489' 00:33:42.063 killing process with pid 334489 00:33:42.063 07:39:49 keyring_linux -- common/autotest_common.sh@969 -- # kill 334489 00:33:42.063 07:39:49 keyring_linux -- common/autotest_common.sh@974 -- # wait 334489 00:33:42.324 00:33:42.324 real 0m4.871s 00:33:42.324 user 0m8.453s 00:33:42.324 sys 0m1.217s 00:33:42.324 07:39:49 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:42.324 07:39:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:42.324 ************************************ 00:33:42.324 END TEST keyring_linux 00:33:42.324 ************************************ 00:33:42.324 07:39:49 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:42.324 07:39:49 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:42.324 07:39:49 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:33:42.324 07:39:49 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:33:42.324 07:39:49 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:33:42.324 07:39:49 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:42.324 07:39:49 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:42.324 07:39:49 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 
']' 00:33:42.324 07:39:49 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:42.324 07:39:49 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:42.324 07:39:49 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:33:42.324 07:39:49 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:42.324 07:39:49 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:42.324 07:39:49 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:42.324 07:39:49 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:33:42.324 07:39:49 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:33:42.324 07:39:49 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:33:42.324 07:39:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:42.324 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:33:42.324 07:39:49 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:33:42.324 07:39:49 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:42.324 07:39:49 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:42.324 07:39:49 -- common/autotest_common.sh@10 -- # set +x 00:33:50.471 INFO: APP EXITING 00:33:50.471 INFO: killing all VMs 00:33:50.471 INFO: killing vhost app 00:33:50.471 WARN: no vhost pid file found 00:33:50.471 INFO: EXIT DONE 00:33:53.063 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:53.330 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:53.330 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:53.330 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:53.330 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:53.330 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:53.330 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:33:53.330 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:53.330 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:53.330 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:53.330 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:53.330 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:53.591 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:53.591 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:53.591 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:53.591 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:53.591 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:57.799 Cleaning 00:33:57.799 Removing: /var/run/dpdk/spdk0/config 00:33:57.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:57.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:57.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:57.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:57.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:57.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:57.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:57.799 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:57.799 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:57.799 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:57.799 Removing: /var/run/dpdk/spdk1/config 00:33:57.799 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:57.799 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:57.799 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:57.799 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:57.799 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:57.799 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:57.799 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:57.799 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:57.799 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:57.799 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:57.799 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:57.799 Removing: /var/run/dpdk/spdk2/config 00:33:57.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:57.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:57.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:57.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:57.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:57.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:57.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:57.799 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:57.799 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:57.799 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:57.799 Removing: /var/run/dpdk/spdk3/config 00:33:57.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:57.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:57.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:57.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:57.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:57.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:57.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:57.799 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:57.799 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:57.799 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:57.799 Removing: /var/run/dpdk/spdk4/config 00:33:57.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:57.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:57.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:57.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:57.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:57.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:57.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:57.799 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:57.799 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:57.799 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:57.799 Removing: /dev/shm/bdev_svc_trace.1 00:33:57.799 Removing: /dev/shm/nvmf_trace.0 00:33:57.799 Removing: /dev/shm/spdk_tgt_trace.pid4076671 00:33:57.799 Removing: /var/run/dpdk/spdk0 00:33:57.799 Removing: /var/run/dpdk/spdk1 00:33:57.799 Removing: /var/run/dpdk/spdk2 00:33:57.799 Removing: /var/run/dpdk/spdk3 00:33:57.799 Removing: /var/run/dpdk/spdk4 00:33:57.799 Removing: /var/run/dpdk/spdk_pid122039 00:33:57.799 Removing: /var/run/dpdk/spdk_pid127350 00:33:57.799 Removing: /var/run/dpdk/spdk_pid129221 00:33:57.799 Removing: /var/run/dpdk/spdk_pid131450 00:33:57.799 Removing: /var/run/dpdk/spdk_pid131786 00:33:57.799 Removing: /var/run/dpdk/spdk_pid131873 00:33:57.799 Removing: /var/run/dpdk/spdk_pid132141 00:33:57.799 Removing: /var/run/dpdk/spdk_pid132752 00:33:57.799 Removing: /var/run/dpdk/spdk_pid134879 00:33:57.799 Removing: /var/run/dpdk/spdk_pid135952 00:33:57.799 Removing: /var/run/dpdk/spdk_pid136333 00:33:57.799 Removing: /var/run/dpdk/spdk_pid139037 00:33:57.799 
Removing: /var/run/dpdk/spdk_pid139740 00:33:57.799 Removing: /var/run/dpdk/spdk_pid140454 00:33:57.799 Removing: /var/run/dpdk/spdk_pid145592 00:33:57.799 Removing: /var/run/dpdk/spdk_pid157503 00:33:57.799 Removing: /var/run/dpdk/spdk_pid162335 00:33:57.799 Removing: /var/run/dpdk/spdk_pid170118 00:33:57.799 Removing: /var/run/dpdk/spdk_pid171632 00:33:57.799 Removing: /var/run/dpdk/spdk_pid173455 00:33:57.799 Removing: /var/run/dpdk/spdk_pid178571 00:33:57.799 Removing: /var/run/dpdk/spdk_pid183285 00:33:57.799 Removing: /var/run/dpdk/spdk_pid192352 00:33:57.799 Removing: /var/run/dpdk/spdk_pid192354 00:33:57.799 Removing: /var/run/dpdk/spdk_pid197396 00:33:57.799 Removing: /var/run/dpdk/spdk_pid197725 00:33:57.799 Removing: /var/run/dpdk/spdk_pid197958 00:33:57.799 Removing: /var/run/dpdk/spdk_pid198418 00:33:57.799 Removing: /var/run/dpdk/spdk_pid198423 00:33:57.799 Removing: /var/run/dpdk/spdk_pid203974 00:33:57.799 Removing: /var/run/dpdk/spdk_pid204618 00:33:57.799 Removing: /var/run/dpdk/spdk_pid209845 00:33:57.799 Removing: /var/run/dpdk/spdk_pid213136 00:33:57.799 Removing: /var/run/dpdk/spdk_pid219566 00:33:57.799 Removing: /var/run/dpdk/spdk_pid226607 00:33:57.799 Removing: /var/run/dpdk/spdk_pid236508 00:33:57.799 Removing: /var/run/dpdk/spdk_pid244844 00:33:57.799 Removing: /var/run/dpdk/spdk_pid244857 00:33:57.799 Removing: /var/run/dpdk/spdk_pid267408 00:33:57.799 Removing: /var/run/dpdk/spdk_pid268259 00:33:57.799 Removing: /var/run/dpdk/spdk_pid268998 00:33:57.799 Removing: /var/run/dpdk/spdk_pid269684 00:33:57.799 Removing: /var/run/dpdk/spdk_pid270744 00:33:57.799 Removing: /var/run/dpdk/spdk_pid271429 00:33:57.799 Removing: /var/run/dpdk/spdk_pid272127 00:33:57.799 Removing: /var/run/dpdk/spdk_pid272888 00:33:57.799 Removing: /var/run/dpdk/spdk_pid278421 00:33:57.799 Removing: /var/run/dpdk/spdk_pid278762 00:33:57.799 Removing: /var/run/dpdk/spdk_pid286032 00:33:57.799 Removing: /var/run/dpdk/spdk_pid286174 00:33:57.799 Removing: /var/run/dpdk/spdk_pid288930 00:33:57.799 Removing: /var/run/dpdk/spdk_pid296095 00:33:57.799 Removing: /var/run/dpdk/spdk_pid296101 00:33:57.799 Removing: /var/run/dpdk/spdk_pid301965 00:33:57.799 Removing: /var/run/dpdk/spdk_pid304314 00:33:57.799 Removing: /var/run/dpdk/spdk_pid306673 00:33:57.799 Removing: /var/run/dpdk/spdk_pid308053 00:33:57.799 Removing: /var/run/dpdk/spdk_pid310391 00:33:57.799 Removing: /var/run/dpdk/spdk_pid311849 00:33:57.799 Removing: /var/run/dpdk/spdk_pid321512 00:33:57.799 Removing: /var/run/dpdk/spdk_pid322180 00:33:57.799 Removing: /var/run/dpdk/spdk_pid322893 00:33:57.799 Removing: /var/run/dpdk/spdk_pid326259 00:33:57.799 Removing: /var/run/dpdk/spdk_pid326735 00:33:57.799 Removing: /var/run/dpdk/spdk_pid327376 00:33:57.799 Removing: /var/run/dpdk/spdk_pid332116 00:33:57.799 Removing: /var/run/dpdk/spdk_pid332255 00:33:57.799 Removing: /var/run/dpdk/spdk_pid333936 00:33:57.799 Removing: /var/run/dpdk/spdk_pid334489 00:33:57.799 Removing: /var/run/dpdk/spdk_pid334505 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4075130 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4076671 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4077223 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4078410 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4078582 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4079822 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4079978 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4080302 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4081231 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4082008 00:33:57.799 
Removing: /var/run/dpdk/spdk_pid4082321 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4082601 00:33:57.799 Removing: /var/run/dpdk/spdk_pid4082888 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4083258 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4083618 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4083967 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4084179 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4085412 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4088670 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4089038 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4089401 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4089608 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4090104 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4090123 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4090657 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4090829 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4091189 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4091221 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4091563 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4091661 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4092233 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4092392 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4092760 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4097248 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4102517 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4114930 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4115615 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4120768 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4121255 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4126341 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4133161 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4136261 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4148744 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4159454 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4161619 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4162896 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4183647 00:33:57.800 Removing: /var/run/dpdk/spdk_pid4188382 00:33:57.800 Removing: /var/run/dpdk/spdk_pid48065 00:33:57.800 Removing: /var/run/dpdk/spdk_pid54352 00:33:57.800 Removing: /var/run/dpdk/spdk_pid61340 00:33:57.800 Removing: /var/run/dpdk/spdk_pid68545 00:33:57.800 Removing: /var/run/dpdk/spdk_pid68547 00:33:57.800 Removing: /var/run/dpdk/spdk_pid69554 00:33:57.800 Removing: /var/run/dpdk/spdk_pid70555 00:33:57.800 Removing: /var/run/dpdk/spdk_pid71567 00:33:57.800 Removing: /var/run/dpdk/spdk_pid72236 00:33:57.800 Removing: /var/run/dpdk/spdk_pid72241 00:33:57.800 Removing: /var/run/dpdk/spdk_pid72576 00:33:57.800 Removing: /var/run/dpdk/spdk_pid72585 00:33:57.800 Removing: /var/run/dpdk/spdk_pid72663 00:33:57.800 Removing: /var/run/dpdk/spdk_pid73738 00:33:57.800 Removing: /var/run/dpdk/spdk_pid74747 00:33:57.800 Removing: /var/run/dpdk/spdk_pid75826 00:33:57.800 Removing: /var/run/dpdk/spdk_pid76444 00:33:57.800 Removing: /var/run/dpdk/spdk_pid76574 00:33:57.800 Removing: /var/run/dpdk/spdk_pid76814 00:33:58.061 Removing: /var/run/dpdk/spdk_pid78167 00:33:58.061 Removing: /var/run/dpdk/spdk_pid80020 00:33:58.061 Removing: /var/run/dpdk/spdk_pid89998 00:33:58.061 Clean 00:33:58.061 07:40:05 -- common/autotest_common.sh@1451 -- # return 0 00:33:58.061 07:40:05 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:33:58.061 07:40:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:58.061 07:40:05 -- common/autotest_common.sh@10 -- # set +x 00:33:58.061 07:40:05 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:33:58.061 07:40:05 -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:33:58.061 07:40:05 -- common/autotest_common.sh@10 -- # set +x 00:33:58.061 07:40:05 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:58.061 07:40:05 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:58.061 07:40:05 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:58.061 07:40:05 -- spdk/autotest.sh@395 -- # hash lcov 00:33:58.061 07:40:05 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:58.061 07:40:05 -- spdk/autotest.sh@397 -- # hostname 00:33:58.061 07:40:05 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:58.322 geninfo: WARNING: invalid characters removed from testname! 00:34:24.905 07:40:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:25.166 07:40:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:27.081 07:40:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:28.467 07:40:35 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:29.852 07:40:37 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:31.797 07:40:38 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:33.184 07:40:40 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:33.184 07:40:40 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:33.184 07:40:40 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:33.184 07:40:40 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.184 07:40:40 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.184 07:40:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.184 07:40:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.184 07:40:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.184 07:40:40 -- paths/export.sh@5 -- $ export PATH 00:34:33.184 07:40:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.184 07:40:40 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:33.184 07:40:40 -- common/autobuild_common.sh@447 -- $ date +%s 00:34:33.184 07:40:40 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721886040.XXXXXX 00:34:33.184 07:40:40 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721886040.APZJ9z 00:34:33.184 07:40:40 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:34:33.184 07:40:40 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:34:33.184 07:40:40 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:33.184 07:40:40 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:33.184 07:40:40 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:33.184 07:40:40 -- common/autobuild_common.sh@463 -- $ get_config_params 00:34:33.184 07:40:40 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:34:33.184 07:40:40 -- common/autotest_common.sh@10 -- $ set +x 00:34:33.184 07:40:40 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:33.184 07:40:40 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:34:33.184 07:40:40 -- pm/common@17 -- $ local monitor 00:34:33.184 07:40:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:33.184 07:40:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:33.184 07:40:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:33.184 07:40:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:33.184 07:40:40 -- pm/common@21 -- $ date +%s 00:34:33.184 07:40:40 -- pm/common@25 -- $ sleep 1 00:34:33.184 07:40:40 -- pm/common@21 -- $ date +%s 00:34:33.184 07:40:40 -- pm/common@21 -- $ date +%s 00:34:33.184 07:40:40 -- pm/common@21 -- $ date +%s 00:34:33.184 07:40:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721886040 00:34:33.184 07:40:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721886040 00:34:33.184 07:40:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721886040 00:34:33.184 07:40:40 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721886040 00:34:33.184 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721886040_collect-vmstat.pm.log 00:34:33.184 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721886040_collect-cpu-load.pm.log 00:34:33.184 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721886040_collect-cpu-temp.pm.log 00:34:33.184 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721886040_collect-bmc-pm.bmc.pm.log 00:34:34.126 07:40:41 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:34:34.126 07:40:41 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:34:34.126 07:40:41 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:34.127 07:40:41 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:34.127 07:40:41 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:34.127 07:40:41 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:34.127 07:40:41 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:34.127 07:40:41 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:34.127 
07:40:41 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:34.127 07:40:41 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:34.127 07:40:41 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:34.127 07:40:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:34.127 07:40:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:34.127 07:40:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.127 07:40:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:34.127 07:40:41 -- pm/common@44 -- $ pid=346888 00:34:34.127 07:40:41 -- pm/common@50 -- $ kill -TERM 346888 00:34:34.127 07:40:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.127 07:40:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:34.127 07:40:41 -- pm/common@44 -- $ pid=346889 00:34:34.127 07:40:41 -- pm/common@50 -- $ kill -TERM 346889 00:34:34.127 07:40:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.127 07:40:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:34.127 07:40:41 -- pm/common@44 -- $ pid=346891 00:34:34.127 07:40:41 -- pm/common@50 -- $ kill -TERM 346891 00:34:34.127 07:40:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:34.127 07:40:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:34.127 07:40:41 -- pm/common@44 -- $ pid=346914 00:34:34.127 07:40:41 -- pm/common@50 -- $ sudo -E kill -TERM 346914 00:34:34.388 + [[ -n 3954678 ]] 00:34:34.388 + sudo kill 3954678 00:34:34.399 [Pipeline] } 00:34:34.418 [Pipeline] // stage 00:34:34.424 [Pipeline] } 00:34:34.441 [Pipeline] // timeout 00:34:34.447 [Pipeline] } 00:34:34.465 [Pipeline] // catchError 00:34:34.471 [Pipeline] } 00:34:34.491 [Pipeline] // wrap 00:34:34.498 [Pipeline] } 00:34:34.513 [Pipeline] // catchError 00:34:34.523 [Pipeline] stage 00:34:34.526 [Pipeline] { (Epilogue) 00:34:34.541 [Pipeline] catchError 00:34:34.543 [Pipeline] { 00:34:34.557 [Pipeline] echo 00:34:34.559 Cleanup processes 00:34:34.565 [Pipeline] sh 00:34:34.858 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:34.858 346995 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:34.858 347437 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:34.871 [Pipeline] sh 00:34:35.156 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:35.156 ++ grep -v 'sudo pgrep' 00:34:35.156 ++ awk '{print $1}' 00:34:35.156 + sudo kill -9 346995 00:34:35.169 [Pipeline] sh 00:34:35.456 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:47.704 [Pipeline] sh 00:34:47.992 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:47.992 Artifacts sizes are good 00:34:48.009 [Pipeline] archiveArtifacts 00:34:48.016 Archiving artifacts 00:34:48.255 [Pipeline] sh 00:34:48.544 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:48.560 [Pipeline] cleanWs 00:34:48.571 [WS-CLEANUP] Deleting project workspace... 00:34:48.571 [WS-CLEANUP] Deferred wipeout is used... 
00:34:48.578 [WS-CLEANUP] done 00:34:48.580 [Pipeline] } 00:34:48.595 [Pipeline] // catchError 00:34:48.607 [Pipeline] sh 00:34:48.896 + logger -p user.info -t JENKINS-CI 00:34:48.906 [Pipeline] } 00:34:48.924 [Pipeline] // stage 00:34:48.931 [Pipeline] } 00:34:48.949 [Pipeline] // node 00:34:48.955 [Pipeline] End of Pipeline 00:34:48.990 Finished: SUCCESS